iBeacon Opens Door for iWallet Sun, 08 Dec 2013 22:00:03 +0000

The original address for this post is iBeacon Opens Door for iWallet. If you're reading it on another site, please stop by and visit.

Apple is quietly rolling out a payment revolution

Tristan is the founder and CEO of Keepskor. This post was initially published under the title iBeacon Opens Door for iWallet. You can follow Tristan on Twitter at @TNLNYC


This week, Apple unveiled the first large-scale integration of iBeacon, a technology that, when coupled with its fingerprint identification system, may be another building block in a revolutionary new payment system.

What is iBeacon?

Before we go into the details of how this new frictionless approach to payment will work, let me explain what its component pieces are. In very simple terms, iBeacon is an indoor positioning system. While your GPS can identify where you are, its accuracy can be limited by a number of factors when you are indoors. So iBeacons are little radio transmitters that use very small amounts of electricity and can send information to a smartphone. The technology leverages an addition to the Bluetooth standard called Bluetooth Low Energy and is available on every iOS device since the iPhone 4S and every Android phone that supports Bluetooth 4.0 and Android OS 4.3 or later (that means popular devices like the Samsung Galaxy S III and S4, the Nexus 4 or later, the HTC One, and the Droid DNA all support it).
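
Under the hood, a beacon does nothing more than repeatedly broadcast a tiny Bluetooth Low Energy advertisement identifying itself. As a rough sketch (in Python rather than Objective-C, and using a sample UUID and made-up major/minor values), the manufacturer-specific portion of an iBeacon packet can be unpacked like this:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific data of an iBeacon BLE advertisement.

    Layout (25 bytes): Apple company ID 0x004C (little-endian), beacon
    type 0x02, payload length 0x15, a 16-byte proximity UUID, 2-byte major
    and 2-byte minor identifiers (big-endian), and a signed "measured
    power" byte (expected RSSI at 1 meter, used to estimate distance).
    """
    company, btype, length = struct.unpack_from("<HBB", mfg_data, 0)
    if (company, btype, length) != (0x004C, 0x02, 0x15):
        raise ValueError("not an iBeacon advertisement")
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, power = struct.unpack_from(">HHb", mfg_data, 20)
    return proximity_uuid, major, minor, power

# A hypothetical advertisement for a store beacon:
frame = bytes.fromhex("4c000215"
                      + "e2c56db5dffb48d2b060d0f5a71096e0"  # proximity UUID
                      + "0001"    # major: e.g. store number 1
                      + "000a"    # minor: e.g. aisle 10
                      + "c5")     # measured power: -59 dBm at 1 m
print(parse_ibeacon(frame))
```

The UUID identifies a deployment (say, a retail chain), while major and minor can identify a store and a spot within it; the measured-power byte lets the phone estimate how far away the beacon is.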

When a consumer gets close to an iBeacon-enabled location, information can be pushed to their device via push messages and the consumer’s location is made available to the retailer, enabling in-store promotions, payment capabilities, and other use cases yet to be defined.

This week, Apple rolled out the technology in all its stores, allowing consumers to skip the register line and purchase items directly from their device when shopping for holiday items. With over half a billion users on iTunes, the company has a large trove of credit card numbers on file, allowing it to streamline the payment process for existing customers.

How it was rolled out is an interesting case study in the widespread enablement of new technology. To understand how such a rollout can happen so quickly and quietly, one needs to delve into the technical specification for what is happening. And that’s where things get interesting.

The following passage was buried in the iBeacon documentation for developers (emphasis is mine):

The Objective-C interfaces of this framework allow you to do the following:

  • Scan for Bluetooth accessories and connect and disconnect to ones you find
  • Vend services from your app, turning the iOS device into a peripheral for other Bluetooth devices
  • Broadcast iBeacon information from the iOS device

Objective-C is the programming language used by iOS developers. Vending services from an app makes it clear that the technology is not just about location. But the last line, about broadcasting iBeacon information from the iOS device, is the important clue here. With this, Apple has essentially declared that every iPhone, iPod Touch, and iPad sold since the iPhone 4S can turn into an iBeacon.

In a world where an increasing number of point-of-sale systems are being replaced by iPads, this is a revolution in the making, as Apple can now have local broadcasting in every store that uses an iPad. But how does that tie into payments?

The iWallet

In order to understand how Apple can do payments quickly, one has to take a step back and think of where Apple stores information: for most users, that information is stored in iTunes, gathered the first time they paid 99 cents for a music track, TV episode, movie, or app. That information has quietly added up to around half a billion customer accounts, a trove of data matched only by Amazon in terms of sheer size.

Earlier this year, with the rollout of iCloud Keychain on mobile devices, Apple started taking information from iTunes and Mac computers down to its users’ devices. With loyalty cards increasingly being stored in Passbook and credit card information being stored in iCloud Keychain, Apple has been downloading the contents of consumers’ wallets onto its devices, leaving cash as the only use case it does not directly compete with.

To secure all this, the company has been leveraging the one unique thing that customers will always have with them when they’re using their phone: their fingers. With the release of the iPhone 5S, the Cupertino giant unveiled the Touch ID system, which allows users to unlock a device with their fingerprint. While Apple assures users that the fingerprint itself is not stored on the device, it is clear that it has found ways to uniquely tie that fingerprint to information on the device, creating a secure key based on a user’s unique pattern. The net result is that, when combined with iCloud Keychain and Passbook, an iOS device sporting Touch ID is now more secure than a physical wallet.

And since any iOS device can turn into a point of sale, the last thing remaining in your wallet can easily be replaced: Apple has turned your phone into both an outbound payment system and a system that could potentially receive money too, with no physical credit card required.

Winning through reduced friction

With such a large opportunity, it’s obvious that competitors were bound to emerge, and they have. Square, for example, was an early mover, offering a credit card scanner that attaches to smartphones. Meanwhile, Google has been trying to push Google Wallet, a piece of software that was supposed to replace users’ wallets but has been increasingly moving away from the space; and the large mobile carriers (Verizon, AT&T, T-Mobile) have banded together to back ISIS, a solution that is supposed to allow users to pay with their phones.

While Square has focused on leveraging the existing payment systems (credit and debit cards), all the other players have had difficulty enabling a frictionless approach to mobile payment. Google Wallet required widespread deployment of NFC on both registers and devices, a solution that was too expensive for most businesses; the ISIS conglomerate found its different stakeholders arguing over control, bringing its product to market in a fashion that requires not only specialized hardware on the device but also a trip to the wireless carrier store to replace one’s SIM card in order to enable the capability.

The net result is that those previous efforts have largely failed because they required too much work on the part of the user. By building the components one by one and providing value around each of them, Apple may be close to assembling a complete payment revolution.

In the past few years, it has gotten users’ payment information by asking them to enter it into iTunes when they bought an app, a video, or some music. This allowed the company to build up a large database of payment details. Then it added iBeacon to every new device it rolled out without asking anything of its users. Later, it pushed users to share their payment information on the device through iCloud Keychain by offering to synchronize their passwords in the process; and finally, it unveiled the fingerprint reader as an easier way to unlock the iPhone.

Every step of the way, the company focused on reducing friction and providing increased value for the user while its competitors asked users to do more work. The net result is that users have voluntarily provided all the components Apple now needs to enable a payment revolution. And we’re about to witness the rise of the iWallet, maybe not this year but pretty soon.


Bitcoin At Crossroad Sun, 01 Dec 2013 22:00:15 +0000

The original address for this post is Bitcoin At Crossroad. If you're reading it on another site, please stop by and visit.

A new digital currency emerges: can it survive?

Tristan is the founder and CEO of Keepskor. This post was initially published under the title Bitcoin At Crossroad. You can follow Tristan on Twitter at @TNLNYC


Bitcoin, the peer-to-peer currency with no central bank, based on digital tokens with no intrinsic value, appears to be headed where no other virtual currency has gone to date. Its lack of centralization mirrors the decentralization that has defined the internet, but it also presents new challenges for regulators.

First mentioned in a research paper written under a pseudonym five years ago, Bitcoin has, in the last year, moved from the domain of crypto-geeks to more mainstream adoption. Today, companies are offering specialized computers and chips to “mine” bitcoins, and many middlemen have emerged to help turn it from an interesting math problem into a useful currency that can be used to purchase items or make donations. A new ecosystem has arisen around the currency, with both good and bad actors.

This has forced governments and regulators to scramble to define how the new currency fits into the global regulatory framework. In March, the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued guidance that would require people mining bitcoins for financial gain to register with the government. A couple of months later, the Kenyan government gave its approval to a project linking bitcoin to M-PESA, a popular mobile currency in that country. By August, the German government had essentially recognized bitcoin as a real currency sitting alongside the euro and the dollar.

And by late November, the US Senate was holding hearings on the currency that were described as a “lovefest,” and US Federal Reserve chairman Ben Bernanke gave the currency some extra legitimacy by mentioning its potential as an alternative currency. A recent note by the Chicago Federal Reserve concluded that Bitcoin “represents a remarkable conceptual and technical achievement, which may well be used by existing financial institutions (which could issue their own bitcoins) or even by governments themselves,” a rallying cry to central bankers that the currency could become a strong emerging standard in need of regulation.

All this has added up to Bitcoin becoming a new hope on the long road to creating a virtual currency. That road is littered with the corpses of previous efforts dating back to the early days of the commercial internet: Digicash, Ecash, Beenz, Flooz, Linden Dollars, and Qcoins were all once heralded as internet-born currencies unattached to any government. Digicash ended up being acquired by Ecash, which was itself acquired after the dotcom crash; Flooz collapsed after the FBI found it was being used as a tool for credit card fraud; Beenz turned itself into a rewards scheme; the Linden dollar met with the same kind of enthusiasm as bitcoin but never gained widespread acceptance beyond its initial base in Second Life; and the Qcoin set China on fire, even leading the Chinese central authorities to complain about its impact on the yuan.

Each of these cases is instructive about the road ahead for bitcoin.

Regulatory Issues

Over the next few months, expect a lot more legal activity around bitcoin. The currency has already received some negative press due to criminal activities performed with it (in truth, all successful technologies have good and bad uses, so one may consider this another validation point): in 2011, security firm Symantec noted that criminals were using computer trojans to create botnets that used victims’ computers to mine bitcoins; earlier this year, US authorities seized assets from Mt. Gox, the leading bitcoin exchange, arguing it was being used for money laundering; and in October, Silk Road, a sort of eBay for drugs and other illegal goods, was shut down, throwing a pall over bitcoin as it was the currency of choice for the marketplace. These events have led prominent politicians like New York senator Chuck Schumer to look at bitcoin as a money laundering tool, even as some senatorial candidates now take bitcoin donations.

So the future of bitcoin may be defined by how well regulatory frameworks adopt it. With legitimate uses behind it, there is room for the currency to succeed, but remember that legitimate uses existed behind every previous effort. To meet a better fate than previous cryptocurrencies, the bitcoin community may have to make certain concessions around tracking or identifying bitcoin activity, loosening some of the anonymity that has been a defining element of the currency and potentially making it more like a credit card or PayPal transaction.

Today, a large part of the financial system is based on understanding and reporting how money flows, with different types of reporting and identification requirements based on the size of transactions. For example, US financial institutions are required to file a Currency Transaction Report (CTR) for every transaction over $10,000 (at today’s bitcoin rate, that would mean somewhere around 10 bitcoins); donations to politicians or non-profit organizations need to be accounted for for tax purposes, so every individual or organization willing to accept bitcoin donations must have a way to track the source and size of a transaction; and companies that set themselves up as bitcoin exchanges are treated as money transmitters (or money transfer operators) and fall under yet other sets of regulations. In order for bitcoin to survive, each element in the distribution chain will have to agree to move closer to the regulatory framework.
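
To see why this is awkward for bitcoin specifically, consider how a fixed dollar threshold like the CTR’s maps onto a volatile currency. The sketch below uses illustrative exchange rates (not actual quotes) to show how the bitcoin equivalent of the $10,000 trigger moves:

```python
def ctr_threshold_btc(btc_price_usd: float, threshold_usd: float = 10_000) -> float:
    """Bitcoin equivalent of the $10,000 Currency Transaction Report trigger."""
    return threshold_usd / btc_price_usd

# The BTC amount that triggers a report swings with the exchange rate,
# so any rule written in bitcoin terms would drift daily:
for price in (500, 1_000, 1_200):
    print(f"${price}/BTC -> CTR above {ctr_threshold_btc(price):.1f} BTC")
```

At an assumed $1,000/BTC, the threshold is the roughly 10 bitcoins mentioned above; a 20% price move shifts it to about 8.3.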

Then there is the question of threat: when the Qcoin started gaining traction as a means of exchange, the Chinese government applied pressure on Tencent, the company behind it. But as bitcoin is decentralized and has no single corporate owner, expect a fair amount of hand-wringing from regulators, who generally see a threat in things they cannot control.

Technical challenges

The technology that has made bitcoin successful to date may also present issues down the road. At the current time, over half of all the bitcoins that will ever be in circulation have been mined, due to a hard limit set in the mathematical algorithm controlling distribution, which caps the total number of bitcoins at 21 million. That scarcity has helped fuel a speculative bubble in bitcoin-to-dollar trading, with prices fluctuating wildly from one day to the next. As fewer and fewer bitcoins become available through mining, new entrants to the market may be limited, making it more difficult for the currency to increase its overall float.
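
The 21 million cap is not a stored constant but a consequence of the mining schedule: the reward for a block starts at 50 bitcoins and halves every 210,000 blocks, with amounts tracked as integer satoshis. A short sketch of how the cap emerges:

```python
# Bitcoin's supply schedule: the block subsidy starts at 50 BTC and halves
# every 210,000 blocks. Amounts are integers in satoshis (1 BTC = 1e8
# satoshis), so integer division eventually drives the subsidy to zero.
SATOSHI = 100_000_000
HALVING_INTERVAL = 210_000

subsidy = 50 * SATOSHI
total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2  # halve the reward, rounding down to a whole satoshi

print(total / SATOSHI)  # just under 21,000,000 BTC
```

Because each halving cuts new issuance in half, the series converges: the last fractions of the final bitcoin will trickle out over decades, which is why the float grows ever more slowly.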

Today, the bitcoin network keeps track of all financial activity in a chain of blocks. As the currency changes hands, that ledger gets longer and longer and will require changes to the protocol in order to grow beyond a certain point. More problems stem from the fact that no compensation exists for the people who relay transactions, an important part of what makes the network function. The increasing cost of relaying transactions may end up making the currency too expensive to manage outside of a small core group of users. To date, the protocol has been amended several times with agreement from most members of the community, but as the currency grows larger, reaching agreement on changes may take substantially longer.

As interest in the currency grows, it is becoming more attractive to criminals, who will look for ways to exploit security flaws in order to steal bitcoins. While the underlying cryptography has not been cracked, expect increased attempts to steal or counterfeit bitcoins over the next few months.

Societal Acceptance

And beyond the technical and regulatory issues lies the societal challenge of gaining acceptance from the general population. On that front, there is hope, as many recent changes have made such currencies more acceptable. The success of the Qcoin in China has led to increased acceptance of bitcoin there. In Europe, the generational move to the euro a decade ago has made individuals more attuned to the idea of using a different currency, with some seeing the new currency as a great alternative to the euro.

For bitcoin to gain more widespread acceptance, advocates will have to make the case that it is as safe to use as any other form of payment, bringing a message to the masses that this is a legitimate and valid currency that can be used in the same fashion as dollars, euros, or yuan.

To date, the growth of bitcoin has been extremely impressive but the currency now sits at a difficult crossroad: on one side, its backers may have to make concessions to established governments and financial operators, abandoning some of the core features that have made it successful to date (anonymity, artificial limits, etc…); on the other, the community may decide that it will not compromise the core philosophy of the currency, setting itself up for an uphill fight against established actors. As a result, the next 12 months will be ones of transition for the cryptocurrency, defining whether it will be sitting alongside debit or credit as a valid form of payment next holiday season or whether it will be seen as another historical footnote in digital currency history.

Cable Not TV Sun, 24 Nov 2013 18:49:39 +0000

The original address for this post is Cable Not TV. If you're reading it on another site, please stop by and visit.

Cable TV is more about the cable than it is about the TV

Tristan is the founder and CEO of Keepskor. This post was initially published under the title Cable Not TV. You can follow Tristan on Twitter at @TNLNYC


As rumors of mergers and acquisitions are increasing in the cable industry, with the recent news that Time Warner Cable may be acquired by either Comcast or Charter, one thing is becoming increasingly clear: cable TV is more about the cable than it is about the TV.

Broadband at the core

What started as a trickle a few years ago is increasingly looking like a steady flow away from television subscriptions. In the third quarter of 2013, 113,000 people switched off their cable or satellite TV service. The number may be small, but compared to the 80,000 people who had turned off similar service over the previous year, it represents a substantial acceleration. Yet the cable providers do not seem overly worried, as this loss of customers doesn’t seem to translate into any direct impact on their bottom line.

While TV subscriptions were being turned off, cable companies and telcos added over half a million new high-speed internet customers in the last quarter. While Comcast (18 million subscribers), AT&T (16 million), and Time Warner Cable (10 million) sit at the top of the industry, the vast majority of other providers in the space have under 5 million high-speed internet subscribers, creating a space that is ripe for consolidation.

One of the chief proponents of this consolidation, John Malone, is a cable industry veteran who built TCI, a cable giant whose assets were eventually bought by Charter Communications and Comcast. He is now looking to use Charter, the sixth-largest provider of high-speed internet service in the US (3.6 million subscribers), to launch that consolidation.

One may wonder how the sixth-largest company could leap several places by acquiring the number three in the space. The answer comes down to geographic concentration. According to the national broadband map, the areas Charter and Time Warner Cable serve are physically close without being directly competitive, which could simplify physical integration and upgrades. The technology stacks they use are similar, and both are in need of an upgrade if they want to compete for the next generation of subscribers.

And then comes the question of bandwidth pricing. Time Warner Cable is currently tied to a model where unlimited bandwidth is provided to users, a descendant of the all-you-can-eat offering created around internet access in the 1990s. Because Charter came to the market later, it charges on a different model, placing caps on the amount of bandwidth a user can consume. That small difference allows the company to charge more for the same amount of bandwidth: in a world where Netflix, YouTube, and others provide video services that eat up large portions of internet traffic, it can translate into richer revenue for a combined entity.

TV channels are apps

But why are people leaving TV for the internet? Is Netflix the only company to blame? And is it just temporary? To understand this, it is important to talk to the upcoming generation. Kids under 12, for example, have less of an understanding of TV channels than they do of TV shows: in a world where Netflix provides a kid-friendly interface, the focus is on the shows and the characters, and the offerings are always on-demand. This will present a generational challenge in a decade or so, as those individuals start making buying decisions about cable and internet service.

Combine this with the increasing offerings from individual channels as apps and you can easily see where this is going: TV is increasingly getting unbundled, as channels are now only loosely associated with the underlying cable package. Aereo, for example, has been bundling live over-the-air TV into a convenient app; Netflix, Hulu, Apple, and Amazon have arisen as aggregators of content and are now starting to produce some of their own offerings; HBO has created a successful app that provides its shows on-demand; and increasingly, TV stations are debuting new shows in mobile apps. Add all those pieces together and the pattern that emerges is that TV channels are becoming apps. Those apps can then either be bundled and sold as packages, as cable TV operators have done, or offered a la carte over any internet connection.

Where is the box?

While this may all be fine when it comes to watching a show on a tablet or a smartphone, people may feel that the experience of a large screen is substantially better, which has brought forth a number of offerings: today, you can watch Netflix or Hulu through a smart television or through boxes that attach to your TV. Those boxes, from providers like Apple (AppleTV) and Roku, give you access to a limited set of offerings but may not fully represent what the future holds, because they require that individual channels create a separate app for each device. In all of this, Google has taken a more intriguing approach: it now sells Chromecast, a small device you attach to your TV’s HDMI port, for $35. The device by itself doesn’t do much, but when paired with an app on your smartphone, it can turn magical: it essentially lets you use your smartphone app as a remote for your TV, giving you access to all the content in the app while sending the video stream to the TV screen. This reductionist approach means that any app provider can easily add the functionality to their app without worrying about the type of TV it will run on. And considering the price at which the device sells, we might see the company license the technology to embed it directly into TVs.

Once you remove the set-top box from your TV and turn every channel into an app, the only thing the cable companies have left to offer is the big pipes that come into your house. And consolidation of those pipes can make a big difference when negotiating with equipment providers or trying to eke out profit from customers. Viewed through this lens, it becomes increasingly clear that the cable consolidation has little to do with TV and everything to do with cable.

Putting Snapchat in context Sun, 17 Nov 2013 22:00:45 +0000

The original address for this post is Putting Snapchat in context. If you're reading it on another site, please stop by and visit.

Was the company right to turn down $3B from Facebook?

Tristan is the founder and CEO of Keepskor. This post was initially published under the title Putting Snapchat in context. You can follow Tristan on Twitter at @TNLNYC


Snapchat shows that people are getting concerned about privacy

The internet industry has been abuzz with the rumor that Snapchat, the hot app that allows anyone to take pictures and videos that expire after a limited time period, rebuffed a $3 billion acquisition offer from Facebook.

While billion-dollar offers in the consumer internet space are relatively rare, we can still look at the roughly 20 deals of that size that have been put forward in the space to get a sense of what those kinds of valuations entail. This allows us to put Snapchat in a wider context and better understand its decision.

But first, let’s take a look at the list of consumer internet companies that have received rumored or completed acquisition offers in the greater-than-$1-billion range:


| Company   | Offering party | Price  | Result   | Year company founded | Year offer was made | Current value (in billions) |
|-----------|----------------|--------|----------|----------------------|---------------------|-----------------------------|
| Facebook  | Yahoo          | $1B    | Declined | 2004                 | 2006                | $120B                       |
| Instagram | Facebook       | $1B    | Acquired | 2010                 | 2012                |                             |
| Tumblr    | Yahoo          | $1.1B  | Acquired | 2007                 | 2013                |                             |
| Mapquest  | AOL            | $1.1B  | Acquired | 1996                 | 2000                |                             |
| Waze      | Google         | $1.1B  | Acquired | 2008                 | 2013                |                             |
| Paypal    | Ebay           | $1.5B  | Acquired | 1998                 | 2002                |                             |
| YouTube   | Google         | $1.6B  | Acquired | 2005                 | 2006                |                             |
| Facebook  | Viacom         | $2B    | Declined | 2004                 | 2007                | $120B                       |
| Skype     | Ebay           | $2.4B  | Acquired | 2003                 | 2005                | Resold for $2.75B           |
| Rovio     | Zynga          | $2.5B  | Declined | 2005                 | 2011                | $9B (rumored)               |
| GroupOn   | Yahoo          | $3.6B  | Declined | 2008                 | 2010                | $7.5B                       |
| Snapchat  | Facebook       | $3B    | Declined | 2011                 | 2013                | $4B (rumored)               |
| Geocities | Yahoo          | $3.6B  | Acquired | 1996                 | 1999                | $0 (Dead)                   |
| Netscape  | AOL            | $4.2B  | Acquired | 1994                 | 1998                | $0 (Dead)                   |
|           | Yahoo          | $5.7B  | Acquired | 1995                 | 1999                | $0 (Dead)                   |
| Groupon   | Google         | $6B    | Declined | 2008                 | 2010                | $7.5B                       |
| Twitter   | Google         | $8B    | Declined | 2006                 | 2010                | $24B                        |
| Skype     | Microsoft      | $8.5B  | Acquired | 2003                 | 2011                | Valued at $2.75B in 2009    |
| Twitter   | Facebook       | $10B   | Declined | 2006                 | 2011                | $24B                        |
| Facebook  | Google         | $15B   | Declined | 2004                 | 2008                | $120B                       |
| Yahoo     | Microsoft      | $44B   | Declined | 1994                 | 2008                | $36B                        |

Looking at this table, the first thing that becomes clear is that Snapchat’s chances of getting other offers are limited. This points to the company having a strong belief that it can be the social network of the future and beat the likes of Twitter and Facebook at their own game. At this high a valuation, the number of potential acquirers becomes small: Facebook may still try, and Google and Microsoft would be among the few other US-based companies that could play here. Tencent and Alibaba seem like the only other potential suitors on a global basis.

At this point, the company is two years old. By way of comparison, Facebook was two years old when Yahoo attempted to acquire it, as was Instagram when Facebook picked it up. Skype was also two when Ebay bought it, and GroupOn rebuffed a $3.6 billion offer from Yahoo and a $6 billion one from Google in its second year of business. So one could argue that Snapchat has a lot of upside potential, as it is still a relatively new business being valued at a much higher rate than other companies at the same point in their evolution. Whether that translates into sustained growth over a longer period remains to be seen, but it seems likely that we will hear about another offer at a higher valuation within the next 12 months.

Its path to success, however, becomes a little more complicated. If the company wants to offer liquidity to its investors, it will have to find a way to either increase the value at which it is acquired or start generating substantial revenue and complete a public offering. If it were to chart a course similar to Twitter’s, Snapchat would have to essentially triple its worth over the next couple of years and then double it again over the two after that. This would set its value at somewhere around $9 billion by 2015, or a potential IPO of around $18 billion by 2017.
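
The back-of-the-envelope math behind those figures is straightforward:

```python
offer = 3.0  # in $B: the Facebook offer Snapchat reportedly turned down

# The Twitter-like trajectory sketched above: triple over the next couple
# of years, then double again over the two after that.
by_2015 = offer * 3
by_2017 = by_2015 * 2
print(by_2015, by_2017)  # 9.0 18.0 (i.e. $9B by 2015, $18B by 2017)
```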

However, success on those courses is relatively rare. Witness, for example, Groupon. The company turned down a $6 billion acquisition offer from Google in 2010, two years into its business life; three years later, as a public company, it is thought to be worth around $7.5 billion. Meanwhile, acquired darlings of the 1990s like Geocities (acquired by Yahoo for $3.6B in 1999), Netscape (acquired for $4.2B by AOL in 1998), and another property acquired for $5.7 billion by Yahoo all ended up being shut down. And in 2008, Yahoo rebuffed a $44B acquisition offer from Microsoft; today, its market capitalization sits at around $36B.

Of course, Snapchat may pin its hopes on becoming the next Skype, Twitter, or Facebook. Skype was acquired for $8.5B by Microsoft, a price higher than what Microsoft paid for Nokia’s devices business; Facebook and Twitter turned themselves into rich advertising-based public companies with market caps several multiples higher than the acquisition offers they received. Interestingly, Snapchat has a number of characteristics in common with those companies:

  • It’s about communication: Skype, Facebook, and Twitter built their businesses on facilitating communication between individuals. Snapchat is doing the same by taking a mode of communication that had drifted from one-on-one text and audio chat on Skype to publicly open conversation on Facebook and Twitter, and bringing it back to a more private realm.
  • It’s mobile: Snapchat exists only in the mobile space, which is hot right now. By comparison, with the exception of Instagram and Rovio, all the other companies that ended up in the billion-plus acquisition club had backgrounds in the PC space. Because all the giant players (Twitter, Facebook, Google, Microsoft, Yahoo) need a stronger footprint in mobile, they may be willing to pay a premium.
  • It taps into dissatisfaction with the current model: Facebook’s and Twitter’s default-open approach to privacy seems to worry an increasing portion of the market. By offering self-destructing messages, Snapchat may present a solution to the social media conundrum of “do I share, or do I worry about my digital footprint?”

Today, though, Snapchat sits in a very gray area. At $3B, it sits on the high end of potential offers. So your view on whether the company was right to turn down the deal ties into your expectations of whether it can become the next Twitter or Facebook. If you believe it can, then the $3 billion valuation was low; otherwise, it was high. And considering recent rumors that Google had offered $4 billion, it’s unclear how far up the sticker price could go.

Product Lessons From Twitter Sun, 10 Nov 2013 18:33:11 +0000

The original address for this post is Product Lessons From Twitter. If you're reading it on another site, please stop by and visit.

This week's successful Twitter IPO provides valuable lessons for product developers

When Twitter was born, the 140-character limit looked like a capricious attempt at conciseness. In a world where blogs were dominant, Twitter looked like a toy with unclear prospects. But as the mobile world grew, so did Twitter, and along with it a number of valuable lessons for every product manager in the tech world. In the wake of the company’s successful IPO this week, let’s take a look at how product features helped the company succeed.


At its core, the purest unit of Twitter is the tweet, a grouping of 140 characters that can be shared. This serves as the nerve center of everything the company does. Any company that wants to succeed needs to find its core atomic unit: for Google, it’s the search box (look at its front page); for Facebook, the user profile; for Instagram and Pinterest, the picture; for LinkedIn, the business card (or business profile); for Foursquare, your location. Startups that want to follow in those companies’ footsteps need to think about the most basic component that remains when they strip everything else away from their product.

That kind of reduction seems easy but is actually a huge challenge, as this core component then needs to be distinctive enough to succeed. The most successful companies are the ones that can find a unit that resonates at the smallest level. The core question to ask at that stage is: what can I remove from my idea? The more you remove, the clearer the picture becomes and the more focused the offering gets.

But removing does not mean throwing away. It just means temporarily putting things aside until you can figure out what makes the product successful. Once you cannot remove anything more, you have the basic unit for your product. While Twitter did this before the product was in the market, we can see similar reductions in products like the Apple iPod shuffle. When Apple could no longer advance the iPod by extending what it could do, it went in the reverse direction and realized that, at its core, the iPod had been about the song as its atomic unit. So it produced a product that simply played songs.


Twitter in 2006

The next step is making that atomic unit work. Twitter’s 140 characters need to be posted, so a box to enter the text and a button to post it are necessary. And once a message is posted, anyone reading it needs to know who it is from, so the tweet needs a name attached to it. This is an important distinction: people look at Twitter as a social network like Facebook, but the reality is that Twitter is centered around the message, not the individual.

But here again, one has to be careful not to overload the functionality. Successful products should be easy to understand. For example, if you compare the original Twitter interface with the original Google interface, the whole difference in the initial experience was encapsulated in the button next to the text entry box: for Twitter, it read “send” (eventually changed to “update” and then “Tweet”), clearly highlighting that the utility was to send the text in the box out onto the internet; for Google, the same button read “Search” (which has morphed into “Google Search” over the years), making it clear that the utility was to find whatever had been entered in the box.

The same can be true in hardware: the initial iPod shuffle had buttons only to play music, skip tracks, and change volume, reducing functionality to its bare essentials.

So when you’re thinking past the atomic unit you’re creating, you have to think about the minimum set of functionality you need to add to it in order to make it work.


Twitter in 2007

Once that core unit has been defined, the next challenge is figuring out how to add utility to it. For Twitter, it started with the ability to reply to a tweet and “favorite” one; then it expanded to the concept of retweeting (sharing a tweet). Over time, the company layered in more functionality, but always with the goal of enhancing the core unit.

Because Twitter was about communication among friends, the concept of following and followers was born. Here again, the distinction from a social network arises: while Facebook wanted to make sure that the relationship between two people was agreed upon, requiring confirmation from both parties, Twitter looked at those relationships as much looser, similar to being a fan, and thus did not require an agreement between the parties.

Note that these pieces of functionality were not in the initial package; they were added progressively. The reason for taking that approach is twofold: first, it ensures that your customers are not overwhelmed with new functionality on day one, keeping the product easy to use and comprehend. Second, it gives developers time to think through the different interactions and develop them based on demand.


Twitter in 2010

As the product evolved, the company had to think about how to improve the experience. This meant balancing the need to remain true to its core against the wants of its users. In 2007, developer Chris Messina introduced a convention from an earlier internet chat system (IRC) to Twitter when he asked fellow users if they would be interested in using a # before some text to make it easier to search through tweets. Thus the hashtag, now thought of as an essential feature of Twitter, was born. It remained a geeky convention for a couple of years before the company started incorporating it into its design. A 2009 redesign brought popular hashtags into the navigation bar, but it wasn’t until 2010 that they became clickable within the discussion flow.
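At its simplest, the hashtag convention just marks a #-prefixed token as searchable, something a few lines of code can illustrate (a simplified sketch, not Twitter’s actual tokenizer, which handles many more edge cases):

```python
import re

def extract_hashtags(tweet):
    """Pull #hashtags out of a tweet's text (simplified sketch)."""
    return re.findall(r"#(\w+)", tweet)

print(extract_hashtags("Loving the #barcamp session on #opendata"))
# → ['barcamp', 'opendata']
```

Once every client tags messages this way, search and trending become straightforward to layer on top, which is part of why the convention survived the transition from IRC.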

This kind of listening to customers and assessing what should and should not go into the product is at the core of product management, and the slow, cautious approach to hashtag integration shows how even a quick-moving company like Twitter can take a long time to make radical changes to its product offering. Here, the lesson for product managers may be that small groups of vocal users may be right, but requirements should be dictated by a critical mass of users.


Twitter in 2013

There comes a point in every product’s life where its core unit may require expansion. A few weeks ago, we talked about how Twitter cards allowed the company to expand beyond the 140-character limit. To recap, cards are extra information carried in a message when it includes a URL. That extra information can then be presented by the Twitter interface to offer a richer experience that may include pictures, videos, or more.

Once again, we see a company that is deliberate in its approach to growing its offering. Looking back at its core unit, it has figured a way to wrap it but not change it, retaining the original purity while extending its functionality.

When Twitter first came out, its core unit was modeled on text messaging. At the time, MMS (the version of SMS that allows you to attach pictures) was already common, but Twitter decided to stay close to the text medium, its core unit. The original product could easily have incorporated images, but it did not. Again, that essential purity gave it its start, while the fact that extra content could be attached highlighted a potential path.

Twitter could have added that extra functionality inside the tweet but decided to put it one layer above when it implemented the new functionality: this is a smart move, as it keeps the core intact. The lesson for product managers is that the core should not be changed if it has been designed properly.


Twitter provides some very valuable lessons in how a product ought to be built. Its success in the marketplace and among consumers shows that small units, properly expanded, are the best way to go when you want to build a service that works at scale. Product managers would be well served to revisit their own product and figure out what its core unit is. And if they cannot find it, they may be facing a larger question about the future prospects of their product.

Tristan is the founder and CEO of Keepskor. This piece was originally posted under the title Product Lessons From Twitter. You can follow Tristan on Twitter at @TNLNYC

Software prices went to 0 Sun, 03 Nov 2013 22:00:53 +0000

The original address for this post is Software prices went to 0. If you're reading it on another site, please stop by and visit.

Apple may just be joining the bandwagon


Last month, Apple announced that it would move many of its popular software packages to a new price tag: free. From operating system upgrades to its productivity suite, the company slashed prices to $0, creating new challenges for Microsoft.

But why would they do such a thing? Was the goal to kill off Microsoft or was the company just joining a bandwagon that was already rolling?

Over the past few years, the consumer software world has been going through a software revolution, with new pricing models emerging over time. Prior to the 2000s, consumer software was either sold in a box (retail software), provided on a trial basis with the ability to pay later or pay for extra features (shareware), or completely free (freeware).

The advent of the internet allowed for new models to emerge: With software being distributed either as web applications or through models that required a connection to the internet, software gained the ability to be distributed on a subscription basis or subsidized through advertising. Over the past few years, application software has increasingly moved to an initial price point that edges closer to zero on a consistent basis. Witness recent changes:

  • Media organizers like iTunes are available for free, subsidized by revenue from selling music, TV shows, and movies in their related stores.
  • Media players like Pandora or Spotify are offered either for free, subsidized by advertising, or on a subscription basis, which removes the ads.
  • Productivity suites (word processing, spreadsheets, presentation software) are either available on a subscription basis (Google Apps, Office 365) or thrown in when buying a new computer (iWork).
  • Adobe offers its image manipulation software on a subscription basis, while Apple offers its iLife suite free with the purchase of hardware.
  • Free to Play has risen to a leading category in the gaming world, putting pressure on the pricing of computer game titles.

Industry insiders point to the offerings on mobile devices having a large impact on consumer expectations regarding software. “The rise of online applications and apps for tablets and smartphones has given rise to an expectation on the part of consumers that software is free,” says industry analyst and Accurra Media Group CEO Jonathan Spira. “As major companies such as Google and then Microsoft have moved more in this direction, and given Apple’s very recent announcement that it will supply its productivity applications at no charge with new Macs and MacBooks, software companies have had to find other sources of revenue, including advertising, in order to be able to offer a quality product to their customers.”

With Apple’s iOS and Google’s Android operating systems offered freely, it was only a question of time before all operating system offerings moved to this model. Apple’s recent price drop on OS X is the continuation of a trend that started with Linux operating systems and became stronger with the rise of mobile devices.

For years, the price of Windows was hidden from consumers because it was bundled with the computer, but with fewer people upgrading their hardware, the company’s pricing strategy has become a weakness in its operating system offerings. Its other monopoly, Microsoft Office, seemed relatively safe until the appearance of Google Apps, a suite of web-based tools offering similar functionality. “Google Apps is the free sword of Damocles ready to behead the paid Microsoft Office hydra,” says Silicon Valley insider Jonathan Hirshorn, who has worked with Apple and other companies for over two decades. “The future is not pay-once; it’s free with targeted paid upgrades on a regular basis,” he adds, pointing out that users are happy to pay for a premium or an upgrade if they already derive some utility from the free base product.

But these new models also put a new set of pressures on software makers, as the cost of software production may not be decreasing as quickly as upfront pricing. Take, for example, video games. Today, the economics of the gaming industry run between two extremes: AAA titles, with production costs often soaring north of $100 million and price tags in the $60-100 range, and free-to-play titles, with more modest ambitions and a revenue model that includes selling virtual goods or virtual currency to unlock higher levels. “Free-to-play gaming is like television while AAA titles are like movies,” says gaming industry veteran Greg Costikyan, who currently works as senior designer at Loot Drop. “There is room for both, although it is notable that TV is much bigger than film (in terms of dollar gross), and I expect FTP will be much bigger than fixed-price games.”

Others at large AAA producers see the same thing happening, admitting off the record that the era of DLC (downloadable content offered as an expansion on top of an existing package) will drive future growth for top-tier companies. “We may be seeing the end of titles being able to fetch $80 on their first day, and we will all need to adapt to that new model,” a senior gaming industry executive confided to me.

With tablets and mobile phones becoming the dominant form of computing, consumers have gotten used to lower software prices, where most apps are either free or cost under $5. Apple has witnessed that trend first hand and is now extending it to the remaining platforms on which it offers software, since it can continue to derive revenue from the sale of hardware. Meanwhile, Google has long looked at advertising as its primary source of revenue, so the idea of charging for software was never baked into its DNA. But Microsoft will have to radically change its own DNA if it wants to succeed in this new world: the company practically invented the idea of charging for software and now finds itself in a world where that idea is no longer relevant.

Tristan is the founder and CEO of Keepskor. This piece was originally posted under the title Software prices went to 0. You can follow Tristan on Twitter at @TNLNYC

RIP Mobile Minutes Sun, 27 Oct 2013 22:00:56 +0000

The original address for this post is RIP Mobile Minutes. If you're reading it on another site, please stop by and visit.

Major changes are underway in the mobile space


With the growth in data usage far outpacing that of voice traffic on mobile networks, wireless providers are quickly abandoning the model of selling baskets of minutes to individual users. This week, AT&T stopped offering minute-based plans for smartphone users, following in the footsteps of T-Mobile, Sprint, and Verizon, which all ditched the minutes model for smartphones over the past year.

Since the dawn of the mobile era, minutes have been the measuring stick of the cellular world, but the introduction of smartphones in the late 2000s radically changed wireless usage profiles. Since 2008, growth in voice revenue has consistently slowed while data revenue has gone on an explosive growth curve. “Data is now the only way they can make money,” said Tim Bajarin, analyst at Creative Strategies.

The move mirrors what happened in the landline business. Over the last few years, data networks and the companies taking advantage of them have allowed telecommunication prices to drop sharply. Voice over IP, a technology that allowed operators to route telephone calls over the internet, first put pressure on landline pricing, eventually leading all the major providers to drop long-distance fees on calls within the country and move to flat pricing, with unlimited minutes becoming the default standard.

With the rise of smartphones, much of the technology that drove landline pricing down eventually made it to the mobile wireless space, with smartphones essentially functioning as computers and thus able to run applications like Skype to communicate. “The whole industry is changing quickly, with Skype and wireless carriers offering unlimited plans hoping you won’t actually use them,” said telecommunications pioneer Alex Mashinsky, who founded Arbinet, an early VoIP operator that was acquired by Primus, in the 1990s. “With consumers no longer looking at minutes, carriers should generate more revenue from the data plans, which are limited.” And as new technologies like Voice over LTE (VoLTE) are implemented, all network traffic will become data traffic, allowing carriers to free up scarce resources and put more people on their existing networks.

“But data is more expensive in terms of bandwidth capital cost for setting up networks, so while they save on voice they still have real costs for enhanced data networks,” points out Bajarin. As a result, wireless operators could find some pressure on their margins: “All-you-can-eat data plans from competitors will keep a check on raising data plan prices,” meaning there is little room to raise prices while the cost of providing service is growing.

The operators will have to look for creative ways to increase revenue, and those gains may come from outside the US market. “The US network business models are out of kilter with the UK,” says European mobile analyst Ewan Spence. “The UK has far more pay-as-you-go input, which tends to be more minutes focused,” a model that could remain appealing to large wireless operators. This difference may explain rumors of a Vodafone takeover by AT&T swirling around the industry: Vodafone recently left the North American market when it sold its stake in Verizon Wireless, and AT&T needs to find new ways to grow now that its hold on Apple hardware has been loosened by a fast-moving T-Mobile. Along the way, it may also present new challenges for companies like Sprint and T-Mobile, which have aimed to differentiate themselves by offering unlimited plans.

Whatever happens in this phase of the telecommunication revolution, it appears that consumers will be on the winning end, as the era of complex mobile wireless plans seems to be coming to an end, with the gigabyte becoming the new measuring stick of wireless costs. And this may be just the beginning of more radical changes, as new players like FreedomPop are now looking to drive the cost of entry plans to zero, a move that could seed the next round of disruption in the industry.



Tristan is the founder and CEO of Keepskor. This piece was originally posted under the title RIP Mobile Minutes. You can follow Tristan on Twitter at @TNLNYC

The wearable computing conundrum Sun, 20 Oct 2013 22:00:48 +0000

The original address for this post is The wearable computing conundrum. If you're reading it on another site, please stop by and visit.

Smartwatches and Google Glass are supposed to be the future, but they may have a hard time dealing with social conventions.


Google Glass: dorky glasses or revolutionary new computer?

For the past few months, I’ve been testing the Pebble smartwatch and talking to people in the test group for Google Glass, and one thing is becoming increasingly clear: we’re not ready for ubiquitous computers. As much as backers tout the benefits, wearable computers flunk too many rules of proper etiquette. They need to be normative to be popular, and right now, they’re not.

Even Vint Cerf, the internet pioneer and Google employee, sees a problem. “Our social conventions have not kept up with the technology,” Cerf said, speaking at the Future in Review conference earlier this year. Cerf, an early supporter of Google Glass, believes that social conventions will evolve, but the challenge may be substantially deeper, as it touches social norms established over centuries.

“It basically means that you’re going to be an asshole, and that it’s easier and easier to ignore people around you,” said Scott Heiferman, founder of Meetup, about Google Glass, adding that he plans at some point to “punch someone in the face wearing Google glasses.” And there are so many negative social responses to Google Glass that the term “glasshole” has come to represent anyone wearing the device.

At The Next Web conference in March, tech columnist Robert Scoble, one of the most vocal supporters of Google Glass, acknowledged their paparazzi-like power to shred whatever privacy we have left: “I could take a 1600mm lens with camera and shoot you from across the street; this is what happens to celebrities all the time and it can’t be stopped.”

The Pebble Watch

“This is not going to be a Google-only problem,” adds Scoble, pointing out that other vendors are moving into the space. Upon seeing that I was wearing a Pebble smartwatch, a New York tech executive recently told me he was now leaving his at home. “People thought I was being rude and checking the time constantly when I was really monitoring incoming messages. It sent the wrong signal,” he said.

Therein lies the wearables conundrum. You can put a phone away and choose not to use it, or turn to it, with permission, if you’re so inclined. Wearables provide no opportunity for pause: their interruptions are nearly continuous, and the interaction is more physical (an averted glance or a vibration directly on your arm). It’s nearly impossible to train yourself out of the reflex-like response of interacting. By comparison, a phone is away (in your pocket, on a table) and has to be reached for.

Wearables don’t have the benefit of normative behavior. We’ve grown used to being interrupted by previous communication technologies. In Europe, it is common for someone to excuse themselves and leave a conversation to take a phone call or use a smartphone. Proponents of wearable devices will have to solve this user experience problem.

Even if they don’t, the technology still has substantial potential in situations where an incoming data stream favors the wearer without imposing on the people around him. For example, surgeons could have access to a patient’s data during an operation, with the patient’s vital signs in their field of view as they operate. Repairmen could have access to information while fixing airplanes, cars, or other devices; journalists could wear those lighter cameras to document events without having to carry heavier equipment; people exercising could receive information about their run or bike ride from the device. This has become an important use case for the Pebble, which synchronizes data with several health-related applications.

But to become a significant consumer product, wearables are going to need the help of the general public in redefining social norms around their usage. Easier said than done.

Tristan is the founder and CEO of Keepskor. This piece was originally posted under the title The wearable computing conundrum. You can follow Tristan on Twitter at @TNLNYC

The Twitter Platform Sun, 13 Oct 2013 22:00:47 +0000

The original address for this post is The Twitter Platform. If you're reading it on another site, please stop by and visit.

Could Twitter succeed where Facebook failed?


As Twitter grows beyond its 140-character limitation and prepares for an IPO, we could be witnessing the rise of a platform that succeeds where Facebook has failed.

You may remember the Facebook platform, launched in May 2007. “Right now, social networks are closed platforms, and today, we’re going to end that,” proclaimed Mark Zuckerberg at the platform’s unveiling, which allowed external developers to build applications on top of the data Facebook had acquired about its users. (Facebook calls this data “the social graph,” which has led many other startups to talk about how they owned “the XX graph” for their vertical.)

The euphoria was palpable as companies like Zynga and BuddyMedia leveraged the opportunity, acquiring millions of new users at relatively low costs. But then, Facebook started restricting what developers could do and slowly, the platform became less and less attractive to developers, with mobile replacing Facebook as the hot new thing.

According to figures from Social Bakers, about 33 per cent of countries on Facebook saw a decline in monthly active users over the last six months, compared to about 11 per cent over the last year. Global Web Index reported earlier this year that Twitter was now the fastest-growing network, posting a 40% user growth rate in the last half of 2012 while Facebook managed only 35%. And with many in the media talking about how the Facebook platform is in decline with developers (e.g. Forbes, Wall Street Journal, AdAge, PandoDaily), a new opportunity has arisen, and Twitter has quietly assembled all the pieces.

Quietly moving to a platform

First came the ability to embed tweets in other websites, an innocuous move that seemed to present no threat to a large platform like Facebook. Then came the follow button, which could be seen as the equivalent of Facebook’s Like button. This was followed by opening up Twitter’s authentication framework, essentially providing login and user management services, to every developer who wanted to leverage it.
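That authentication framework is built on OAuth, in which each API request carries an HMAC-SHA1 signature computed over the request. The sketch below shows roughly how that signing step works under OAuth 1.0a; the URL, parameters, and secrets are made-up examples, and a real client would also handle nonces, timestamps, and the Authorization header:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(method, url, params, consumer_secret, token_secret):
    """Compute an OAuth 1.0a HMAC-SHA1 request signature (simplified sketch)."""
    # Sort and percent-encode the request parameters into a single string
    param_str = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    # Signature base string: METHOD&encoded-url&encoded-params
    base = "&".join([method.upper(), quote(url, safe=""), quote(param_str, safe="")])
    # Signing key: consumer secret and token secret joined by '&'
    key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical credentials, for illustration only
sig = sign_request(
    "POST", "https://api.twitter.com/1.1/statuses/update.json",
    {"status": "hello"}, "consumer-secret", "token-secret",
)
print(sig)  # a 28-character Base64-encoded signature
```

Because the secrets never travel over the wire, a third-party site can let users “sign in with Twitter” without ever seeing their password, which is what made the framework attractive to developers.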

Of course, behind all this, Twitter has always been relatively generous in offering ways to access and augment its data. Companies like DataSift have built strong businesses providing developers with access to all tweets coupled with extra social and sentiment data. And many startups were built and sold around the idea of providing independent software clients to manage one’s interactions with Twitter on mobile devices, connected TVs, and beyond. All of a sudden, Twitter was everywhere.

On June 29, 2012, Twitter showed how it would build a better version of the Facebook platform. It didn’t come through a massive announcement on a stage in front of the press but in a rather terse 439-word blog post on the company’s site by Michael Sippey, VP of product at the company. The post, entitled “Delivering a consistent Twitter experience,” highlighted Twitter’s ambitions in the application-running space.

While much of the ensuing discussion focused on the statement that developers “should not build client apps that mimic or reproduce the mainstream Twitter consumer client experience,” a major portion of the post was dedicated to Twitter cards, with Sippey declaring “we want developers to be able to build applications that run within Tweets.”

Showing one’s cards

So what are Twitter cards?

At their core, Twitter cards are a way to display extra information beyond the self-imposed 140-character limit Twitter created. For example, here’s what a Twitter card for last week’s entry looks like:

Twitter cards

As you can see, a Twitter card can include extra content: images, information about where the piece was published, who wrote it, and so on. Twitter offers different types of cards for apps, video/audio, photos, products, and more.
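Under the hood, a publisher opts into cards by adding a handful of twitter:* meta tags to a page’s HTML, which Twitter reads when the page’s URL appears in a tweet. A minimal sketch of generating that markup (the helper function and its values are hypothetical examples, not Twitter’s own tooling):

```python
def render_card_tags(card_type, title, description, image=None):
    """Render the <meta> tags a page includes so Twitter can build a card
    for it (simplified sketch of the card markup)."""
    tags = {
        "twitter:card": card_type,  # e.g. "summary" or "summary_large_image"
        "twitter:title": title,
        "twitter:description": description,
    }
    if image:
        tags["twitter:image"] = image
    return "\n".join(
        f'<meta name="{name}" content="{content}">'
        for name, content in tags.items()
    )

print(render_card_tags("summary", "The Twitter Platform",
                       "Could Twitter succeed where Facebook failed?"))
```

The key design point is that the publisher supplies only data; the rendering stays entirely in Twitter’s hands.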

By allowing for extra information but not extra presentation to be handled, Twitter gets to control the user experience within the context of a card. This is important because it gives Twitter great amounts of control over how the content is being displayed but also allows them to present it in different ways on different platforms. So a developer can push out Twitter cards to Twitter and not worry about how they will be presented on the web, mobile, or TVs are Twitter will take care of that formatting.

And because cards allow for bite-sized content to be augmented, they fit neatly into the Twitter service, which has prided itself on artificial limits as to how much data it would carry. Interestingly enough, Twitter is not the only company pursuing cards as a new user interaction model.

Google does cards with its Google Now product

Recently, Google unveiled Google Now, a very pervasive system that, interestingly, presents its information as cards (even using the same nomenclature as Twitter) on mobile devices. For Google, the idea is to first leverage this information on Android devices, providing a clean and consistent way to deliver little bits of information. Expect cards to come to the search engine soon, as they provide just the right amount of information around an individual content piece. As more and more digital devices arise with smaller and smaller screens (e.g. Google Glass, smartwatches), the ability to deliver small chunks of content becomes a substantial design issue, and Twitter and Google are at the forefront of a major revolution in user interface design that will stay with us for a very long time.


Look, for example, at how Google is embedding cards into Google Glass, its wearable computing interface.

Google Glass cards

Just as the iOS 7 redesign moved to a flatter template to focus on the content rather than on the interface, cards represent the logical next step, with most of the interface disappearing and the focus put squarely on the content.

Twitter (and Google) are betting that this is the future of interface design, a future where minimalism wins and the context of the information drives the limited number of interface elements displayed. Because the interface disappears, it is the ideal model for any device, from feature phones or smartwatches that present only a few lines of text to mobile devices, connected TVs, or computers where the enhanced data can augment the experience.

Bringing all the pieces together

Twitter has built up a large user population sharing different types of content; it has built a large following for its authentication service; it has created ways for external parties to augment its data to match what Facebook does with its social graph; the ability to create micro-communities around a topic (using hashtags) is embedded in the core messaging layer; it has location data for a large portion of the tweets distributed on its network; and now it has a way to present all this on any device, large or small.

If you start comparing the Facebook and Twitter platforms, you end up with the following:

Feature Facebook Twitter
Core Services
Login and Registration Yes Yes
Geo-localization Yes Yes
Chat Yes Yes
Payments Yes No
Portable UI No Yes (cards)
Social Plugins
Embedding Yes Yes
Activity Feed Yes Yes
Comments Yes Yes
Follow Yes Yes
Share Yes (Share) Yes (Retweet)
Like Yes (Like) Yes (Favorite)
Social Graph Data
Apps Yes (Ad API) Yes (card)
Books Yes No
Fitness Yes No
Music Yes Yes (card)
Movies Yes Yes (card)
Photos (& Galleries) No Yes (card)
Product No Yes (card)

Looking at this chart, it's pretty clear that Twitter is slowly filling the gaps to match the capabilities of the Facebook platform. And don't be surprised if it starts unveiling products aimed at gathering data around books and fitness, or rolls out a payment solution.

Where’s the backlash?

The Facebook platform was undone by a series of actions that led to substantial developer backlash. Because the company started from an “as open as we can possibly be” position and then proceeded to ratchet down its openness as it figured out what business model it could operate on the platform, developers saw it as overly greedy. By comparison, Twitter started with a relatively closed system and has progressively opened the door, sending out signals that it is getting more and more open over time. This is probably the main reason why we are not seeing the kind of negative press around the Twitter platform that we’ve seen about Facebook’s, and it is a critical part of why the company may ultimately succeed where Facebook has apparently failed.


Is Twitter ready for an IPO? Sun, 06 Oct 2013 22:00:57 +0000

The original address for this post is Is Twitter ready for an IPO?. If you're reading it on another site, please stop by and visit.

How does Twitter compare to Facebook and LinkedIn?

Tristan is the founder and CEO of Keepskor. This post was initially published under the title Is Twitter ready for an IPO?. You can follow Tristan on Twitter at @TNLNYC



This week, Twitter made its public offering documents public, giving us the first real glimpse into the company’s revenues and user base. Since user counts are often cited as a significant number in evaluating a company, I’ve looked in the past at how they compare to revenue and market valuation. So with Twitter’s public revenue and user base numbers, along with the $15-20 billion valuation that has been floated as its upcoming market capitalization, we can start drawing some parallels to its publicly traded competitors in the social network space: Facebook and LinkedIn.

What we find is that while the company has more users than LinkedIn (and fewer than Facebook), it does not monetize its user base as well as its competitors did at IPO time.

Average Revenue Per User

In businesses where most of the revenue is derived from being able to sell something related to a user, it is not uncommon to calculate Average Revenue Per User (ARPU). Today, companies in the wireless, entertainment, and digital businesses look at this value as an indicator of how much a user is currently worth to a business. Breaking things down to that level allows for an apples-to-apples comparison without having to worry about one company having more users than another. According to their filings, the values for Facebook, LinkedIn, and Twitter look as follows:

Facebook LinkedIn Twitter
Number of users (millions) 1150 202 218
Revenue (millions) $4300 $972 $317
ARPU $3.74 $4.81 $1.45
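Since ARPU is just revenue divided by user count, the table above is easy to sanity-check. A quick sketch in Python (the numbers are the ones from the table; the dictionary layout is mine):

```python
# ARPU (Average Revenue Per User) = revenue / number of users.
# Figures are the ones quoted in the table above, in millions
# of users and millions of dollars.
companies = {
    "Facebook": {"users": 1150, "revenue": 4300},
    "LinkedIn": {"users": 202, "revenue": 972},
    "Twitter": {"users": 218, "revenue": 317},
}

for name, c in companies.items():
    arpu = c["revenue"] / c["users"]
    print(f"{name}: ${arpu:.2f} per user")
# Facebook: $3.74, LinkedIn: $4.81, Twitter: $1.45
```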

One thing that is apparent here is that Twitter is not as successful as its competitors at turning individual users into high dollar figures. This may be because Twitter’s ad business is still relatively new, or it could be that the company is still figuring out how to improve its own efficiency, but at the current time, Twitter is making 39 cents for every dollar Facebook gets out of a user, and 30 cents for every dollar LinkedIn gets. This could either mean room for substantial growth or represent a warning sign about the company’s ability to run a solid business.

To get a sense of the difference, if Twitter had the same numbers of users as Facebook (and assuming its ARPU didn’t grow), it would have made $1.7 billion (compared to the $4.3 billion Facebook made).

One may point out that Facebook and LinkedIn are more mature businesses. So let’s look at how they were doing, relatively, when they went public:

Facebook LinkedIn Twitter
IPO Year 2012 2010 2013
Number of users (millions) 845 90 218
Revenue (millions) $3711 $162 $317
ARPU $4.4 $1.8 $1.45

What’s interesting here is that Facebook looked like it was doing substantially better on all fronts (revenue, users, ARPU), but Twitter actually comes in much closer to where LinkedIn was in terms of ARPU (still trailing it by 20%), which may point to the potential for explosive growth down the line, much as LinkedIn has since demonstrated. Another interesting sidenote is that Facebook may be growing its user base, but its revenue is no longer growing at the same speed.

Average Valuation Per User

Since there is so much focus on user numbers, one can assume that the number of users may give us a sense of how investors value a company. So I’d like to advance the concept of Average Valuation Per User (AVPU), which takes the market capitalization of a company and divides it by the number of users. This can provide a useful lens for comparing a company’s user population to its valuation:

                            Facebook                LinkedIn                Twitter (assumed at IPO)
                            1st day close  Current  1st day close  Current  Low    High
Number of users (millions)  845            1150     90             202      218    218
Valuation (billions)        $104           $124.3   $9             $27.46   $15    $20
AVPU                        $123           $108     $100           $136     $69    $92
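The AVPU figures can be checked the same way; a quick sketch (the helper name is mine; valuations are in billions, users in millions):

```python
# AVPU (Average Valuation Per User) = market capitalization / number of users.
def avpu(valuation_bn, users_m):
    # billions over millions leaves a factor of 1000 in dollars per user
    return valuation_bn * 1000 / users_m

# Twitter's rumored $15-20 billion range spread over its 218 million users:
low, high = avpu(15, 218), avpu(20, 218)
print(f"${low:.0f} to ${high:.0f} per user")  # → $69 to $92 per user
```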

Current market assumptions are that Twitter will be priced at $15 to $20 billion in its offering, which would mean each user is valued at between $69 and $92 apiece. When compared to the current value of a user at other companies (Facebook: $108; LinkedIn: $136), this does not seem terribly out of line. And when compared to the AVPU those companies had at IPO time ($123 for Facebook and $100 for LinkedIn), it seems reasonable to expect that Twitter could end up closing north of the $20 billion valuation that has been floated as the high end of the price range.


While Twitter’s revenue and user base appear smaller than its competitors’, its IPO filing points to a business that is roughly the same size as LinkedIn’s was at the time of its IPO. The $15-20 billion valuation range that has been floated seems in line with what one might expect from a social media business of that size in revenue and user base, and it points to substantial opportunities for growth down the line. Once public, the company will have to work hard on increasing both its number of users and its average revenue per user if it wants to be considered a successful public company.


To calculate the values, I’ve taken the user and revenue numbers provided by each company in their 2012 filings. In terms of valuation, I’ve taken the valuations of Facebook and LinkedIn at the close of market on Friday and at the close of their first day of public trading; for Twitter, I took the bottom and top of the $15-20 billion range rumored in many reports about the offering. I then divided those market caps and revenue numbers by the number of users to arrive at the calculated values.


The Ballmer Era ends Sun, 29 Sep 2013 22:00:34 +0000

The original address for this post is The Ballmer Era ends. If you're reading it on another site, please stop by and visit.

Steve Ballmer retires. What does his legacy look like?

Tristan is the founder and CEO of Keepskor. This post was initially published under the title The Ballmer Era ends. You can follow Tristan on Twitter at @TNLNYC



This week, Microsoft CEO Steve Ballmer said goodbye to the company’s employees. While he will stay on until a new CEO has been selected, we can now assess his legacy at Microsoft based on the 13 years during which he was at the helm of the company.

Financial performance

1999 2013
Share Price  $58.375  $33.31
Capitalization  $612.72B  $277.14B
Revenue in last quarter  $6.11B  $17.41B
Net Income in last quarter  $2.44B  $5.11B
Diluted EPS in last quarter  $0.44  $0.66

When Ballmer took over the company from Bill Gates on January 1, 2000, Microsoft had already begun sliding from its peak valuation. The highest share price in the company’s history was reached on December 23, 1999, when shares were priced at $58.719, giving the company a $616.3 billion market capitalization. As of this week’s close of market, Microsoft shares were worth $33.27, for a $277.14 billion market capitalization, a 55% drop.

Without any other historical reference, this may look like an utter disaster, but one must consider the timing of the transition and subsequent events. When Ballmer took over, the company was riding the dotcom bubble and had achieved this valuation with $6.11 billion in quarterly revenue and a net income of $2.44 billion, giving it $0.44 in diluted earnings per share. By the end of Ballmer’s first year at the helm, the dotcom crash had wiped out the technology sector, driving Microsoft’s share price to $21.688 by the end of 2000, a low it would not break until its ill-fated effort to purchase Yahoo in March 2009.

The company survived the 2008 financial crisis relatively well, buoyed by strong revenue and a diversified set of products that allowed it to generate cash in an era when credit became very tight. In the last reported quarter, the company’s revenue had grown to $17.41 billion, generating $5.11 billion in net income and $0.66 in diluted earnings per share. On an annualized basis, Microsoft saw its yearly revenue grow from roughly $25 billion to around $70 billion, or an average of 16.4% in annualized growth, a record that beats the performance of well-known CEOs like Jack Welch at GE (11.2%) and Lou Gerstner at IBM (2%) but is eclipsed by Steve Jobs’ record of 33x growth (from $786 million to $25.922 billion) during his tenure as CEO.
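As a quick check, the "roughly $25 billion" and "around $70 billion" yearly figures are simply the quarterly revenue numbers from the table above multiplied by four:

```python
# Annualizing the quarterly revenue figures from the table above
# (in billions of dollars).
q_revenue_2000 = 6.11   # last quarter before Ballmer took over
q_revenue_2013 = 17.41  # last reported quarter

print(q_revenue_2000 * 4, q_revenue_2013 * 4)  # → 24.44 69.64
```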

Significant acquisitions

Over his 13 years at the helm, Ballmer approached acquisitions and external investments more tentatively than his predecessor did. And here again, we see a mixed record, with smaller acquisitions often bringing radical new products into the company’s mix (Kinect; the Halo series that propelled the Xbox to leadership…) and larger acquisition attempts that either failed (Yahoo) or turned into disasters (aQuantive). At the same time, Ballmer did not hesitate to flex the company’s muscle to buy large assets with large sticker prices. During his tenure, he completed 7 acquisitions with price tags north of a billion dollars:

Company Price Year Outcome
Visio $1.4 billion 2000 Part of the Office division, it remains one of the top diagramming tools available.
Navision $1.45 billion 2002 Became Microsoft Dynamics NAV, one of the company’s leading offerings in the ERP business.
aQuantive $6.3 billion 2007 What was, at the time, the largest acquisition made by Microsoft, and one of the largest advertising agencies in the world, led to a $6.2 billion writedown in 2012.
Fast Search $1.2 billion 2008 The core technology helped develop MSN Search, which has since been replaced by the Bing search engine, a distant #2 behind Google; Fast Search remains in Microsoft’s enterprise search products.
Skype $8.5 billion 2011 In 2012, Skype represented 34% of the international calling market and is now replacing Windows Live Messenger.
Yammer $1.2 billion 2012 Incorporated into the Microsoft Office division, but little news has been made since the acquisition.
Nokia $7.2 billion 2013 (expected) Nokia has been the main supporter of Microsoft’s Windows Phone strategy. Will it turn into an aQuantive or a Skype?

Judging from the data, Microsoft under Ballmer did not seem to have a clear direction when it came to large acquisitions, and it could be this lack of focus that led to difficulties. Attempts to incorporate large players from outside the software industry (e.g. aQuantive) have not been successful, which could cast a dark cloud over the potential of the Nokia acquisition.

Strategic investments

But what about strategic investments? Prior to Ballmer, Microsoft was very active in making significant investments in outside businesses, taking large positions in cable companies and telecom companies, creating a TV channel (MSNBC), and even bailing Apple out for the good of the industry. But under Ballmer, the company retrenched from making significant investments in other companies. The focus has been largely internal, with a few rare exceptions. Throughout most of the early 2000s, the company sold its positions in other companies (most notably divesting its $150 million investment in Apple, which would have been worth around $8 billion today), but it did make three notable investments in the last few years:

  • In 2007, the company bought 1.6% of Facebook for what was considered to be an outrageous $240 million (this would be worth around $2 billion today)
  • In 2012, the company bought 17.6% of Nook Media for $300 million, buying itself a seat in the e-reader and tablet market. To date, little integration with Microsoft products has resulted from this investment.
  • In 2012, Microsoft also put $20 million in Klout, a company focused on establishing a user score based on their social media activity. Klout has been integrated in Bing search as a result.

Outside of those three efforts, there has been little in the way of external investment from Microsoft under Ballmer.


And what about products? When Ballmer took the helm, Windows was at its peak, controlling over 90% of the PC market and drawing an antitrust lawsuit that damaged the company’s reputation. But as new devices like tablets and mobile phones emerged, the market share of the company’s flagship OS dropped significantly.

However, it is not for lack of trying.

Microsoft was, surprisingly, the first company to create tablet computers, starting in 2002 with the ill-fated Tablet PC, then through the early 2000s with the UMPC, and eventually pushing the tablet concept into its OS with Windows 8 and the Surface tablet, long after Apple came to dominate that market. The company was forced to write down $900 million as a result of its failure to sell more Surface computers.

In the same way, Microsoft was a proto-player in the smartphone market with its Pocket PC phone line, first presented in 2000, and its touchscreen-based Windows Mobile operating system throughout the 2000s. Because it tried to push for an interface consistent with the experience its users had on regular PCs, the company could not go through the kind of radical reimagining the iPhone represented, and it moved too late to address that issue, leaving it with a relatively small portion of a market it once dominated. The company’s acquisition of Nokia is widely seen as an admission that Apple’s strategy of integrating software and hardware may be the best approach moving forward.

Similar false starts existed in audio players (the Zune) and watches (the SPOT watch, which received data from the internet and could be seen as a grandparent to the Pebble watch, Sony’s and Samsung’s smartwatches, and the much-rumored iWatch), two product lines the company eventually abandoned. While competitors are readying product offerings to go after the watch market, no word has come out of Microsoft about a future roadmap in that space.

But it was not all disaster.

On the consumer end, Microsoft achieved supremacy in the gaming console world with its Xbox and Xbox 360 (a refresh, the Xbox One, is slated for the fall). When the Xbox was first introduced, most industry pundits believed that the company would be crushed by the existing players: Sony, Nintendo, and Sega. Since then, Sega has left the console business, Nintendo is struggling, and only Sony remains as a major contender for the top spot. From an innovation standpoint, the Kinect, which tracks a player’s motion and mirrors it in the game, has helped reestablish Microsoft as an innovator.

And the company has become a strong player in the cloud computing space with its Windows Azure cloud services, playing runner-up to a dominant Amazon in the space.

A mixed legacy

All things told, Ballmer’s legacy will be seen as a mixed one: while he helped the company grow to a new size in terms of overall revenue and created significant new businesses beyond its core OS and Office offerings, a failure to adapt to substantial changes on the consumer side of the computing industry may have damaged the company’s future prospects. Microsoft’s new CEO will inherit a company that is radically different from any of its competitors: one that derives a substantial part of its revenue from enterprise software but also plays a substantial role in the consumer arena. Whether it should be treated as a single company or split up will be one of the hard choices he or she will have to make.


How much for mobile bandwidth? Sun, 22 Sep 2013 22:00:23 +0000

The original address for this post is How much for mobile bandwidth?. If you're reading it on another site, please stop by and visit.

How much does 1Gb of wireless data cost in the US?

Tristan is the founder and CEO of Keepskor. This post was initially published under the title How much for mobile bandwidth?. You can follow Tristan on Twitter at @TNLNYC



With data increasingly overtaking voice as the primary source of traffic on mobile networks, the big 4 providers in the United States (AT&T, Verizon, Sprint, T-mobile) have increasingly moved to offering relatively inexpensive voice and text messaging packages alongside different pricing schemes for data. This has led to a confusing landscape of prices, with different bandwidth numbers being touted at different rates. But taking the marketing data and massaging it in a spreadsheet allows us to normalize it and present it in a way that allows for true comparisons between the different players.

Different plans, different rates

Today, mobile companies offer data plans in two different fashions: either attached to a mobile device (the kind of plan you get with a smartphone subscription) or detached from any device, allowing you to connect multiple ones. While each company offers bundles with voice and text service, I decided to focus on the data-only plans, to remove the variables around voice or text message pricing that could have polluted the data (AT&T calls these plans “Mobile Share,” Verizon named theirs “Share Everything,” for Sprint, it’s “Mobile Broadband,” and T-mobile went with “Mobile Broadband”). Here, already, we notice that the pricing offerings differ, as each provider attempts to sell a different amount of bandwidth, making it difficult to do comparisons. Put simply, it looks as follows:

Bandwidth (Gb)  AT&T   Verizon  Sprint   T-mobile
.5              -      -        -        $20
2.5             -      -        -        $30
3               -      -        $34.99   -
4               $30    $30      -        -
4.5             -      -        -        $40
6               $40    $40      $49.99   -
6.5             -      -        -        $50
8               -      $50      -        -
8.5             -      -        -        $60
10              $60    $60      -        -
10.5            -      -        -        $70
12              -      $70      $79.99   Not Available
14              -      $80      -        -
15              $90    -        -        -
16              -      $90      -        -
18              -      $100     -        -
20              $110   $110     -        -
30              $185   $185     -        -
40              $260   $260     -        -
50              $335   $335     -        -

The first thing you may notice when you look at this chart is the big “Not Available” box north of 10.5Gb per month for T-mobile. I called each company and asked what happens if one goes over their allotted bandwidth. AT&T and Verizon replied that they would charge $15 per extra gigabyte; Sprint’s price for extra bandwidth is 5 cents per Mb (or, considering that a gigabyte is 1024 megabytes, $51.20 per gigabyte). But T-mobile does not provide any offering for going over: if you go over your allocation, it continues serving you traffic, but at a substantially reduced speed, moving you from its 4G LTE network to EDGE, which tops out around the speeds of the phone-based modems people used before broadband access became ubiquitous. So, in order to be fair to all players, I removed that lower-quality service from the equation.
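For reference, Sprint's per-megabyte overage converts to the per-gigabyte figure quoted above as follows (using the 1Gb = 1024Mb convention):

```python
# Sprint bills overages at $0.05 per megabyte.
# With 1 gigabyte = 1024 megabytes, that works out per gigabyte to:
sprint_overage_per_mb = 0.05
sprint_overage_per_gb = sprint_overage_per_mb * 1024
print(f"${sprint_overage_per_gb:.2f} per Gb")  # → $51.20 per Gb
```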

How do they compare?

To compare the different services, I decided to look at the price of 1Gb of service from each provider at the different plan levels. I ran the numbers two ways. First, I assumed one would pre-pay for more bandwidth than they would use, essentially paying a premium to ensure they would never be charged overages: if you needed 5Gb of bandwidth, you would purchase the 6Gb plan to be safe. The result is a chart showing the per-gigabyte price of monthly bandwidth across each provider (bolded prices are the ones where the company actually offers a plan):

Bandwidth (Gb) AT&T Verizon Sprint T-mobile
.5  $60  $60  $69.98  $40
1  $30  $30  $34.99  $30
2  $15  $15  $17.50  $15
2.5  $12  $12  $14  $12
3 $10  $10  $11.66  $13.33
4  $7.50  $7.50  $12.50  $10
4.5  $8.89  $8.89  $11.11  $8.89
6 $6.67  $6.67 $10  $7.50
6.5 $9.23  $7.69  $12.31  $7.69
8  $7.50  $6.25  $10  $7.50
8.5  $7.06  $7.06  $9.41  $7.06
10  $6  $6  $8  $7
10.5  $8.57  $6.67  $7.62  $6.67
12  $7.50  $5.83  $6.67  Not Available
14  $6.43  $5.71  Not Available
15  $6.00  $6
16  $6.88  $5.63
18  $6.11  $5.56
20  $5.50  $5.50
30  $6.17  $6.17
40  $6.50  $6.50
50  $6.70  $6.70
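The per-gigabyte numbers above all follow the same "round up to the smallest plan that covers you" rule. Here is a minimal sketch in Python, using AT&T's plan tiers from the first table (the function name is mine):

```python
# Effective price per gigabyte when you pre-pay for the smallest plan
# that covers your monthly usage. Tiers are AT&T's data-only plans
# (Gb cap, monthly price) from the table above.
ATT_PLANS = [(4, 30), (6, 40), (10, 60), (15, 90), (20, 110),
             (30, 185), (40, 260), (50, 335)]

def per_gb_price(usage_gb, plans):
    """Cheapest covering plan's price divided by the bandwidth actually used."""
    for cap, price in sorted(plans):
        if cap >= usage_gb:
            return price / usage_gb
    return None  # usage exceeds every plan tier

# A 5Gb user buys the 6Gb/$40 plan: $8.00 per gigabyte used.
print(round(per_gb_price(5, ATT_PLANS), 2))  # → 8.0
```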

Of course, the above data does not provide a complete view in that it does not take overages into account. Still, an interesting pattern emerges: across almost every tier, Verizon and AT&T sit at the low end of the price spectrum. Another surprising result is that the value players (Sprint and T-mobile) are relatively expensive compared to the bigger carriers. On average, across all plans, a customer will pay $11.19 per gigabyte on AT&T, $10.79 on Verizon, $16.72 on Sprint, and $13.34 on T-mobile.


But of course, the prices highlighted above only tell part of the story. To get a true sense of full prices, it is probably safe to assume that an individual may go over their bandwidth allocation and pay overage fees. While T-mobile has dropped those charges, AT&T and Verizon both charge users an extra $15 per Gb of data and Sprint charges an extra $51.20 for the same amount (Sprint is actually more granular: it charges an extra $0.05 per Mb). So, taking overages into account, the per-Gb price of mobile bandwidth looks as follows (I’ve excluded T-mobile here as it does not provide any option for full-speed service once you’ve maxed out your bandwidth allocation):

Bandwidth (Gb) AT&T Verizon Sprint
.5 60  $60 $69.98
1 $30  $30  $34.99
2 $17.50  $17.50  $16.67
2.5  $12  $12  $14
3  $10 $10  $11.66
4  $7.50  $7.50  $21.55
4.5  $10  $10  $30.53
6  $6.67  $6.67 $8.33
6.5 $8.46  $8.46  $15.57
8  $8.75  $6.25  $19.05
8.5  $10  $7.65  $22.62
10  $6 $6  $25.48
10.5  $7.14  $7.14  $27.82
12  $7.50  $5.83  $6.67
14  $8.57  $5.71  $13.03
15  $6  $6.33 $15.57
16  $6.88  $5.63  $17.80
18  $6.11  $5.56  $21.51
20  $5.50  $5.50  $24.48
30  $6.17  $6.17  $33.39
40  $6.50  $6.50  $37.84
50  $6.70  $6.70  $40.51

Looking at the data, the net impact of overages is mostly felt by Sprint users. Because of the large number of high bandwidth plans offered by AT&T and Verizon, the chances that a user would fall within an overage range are more limited, which keeps their bandwidth prices relatively stable.

Why variable pricing?

This analysis points to an interesting fact: each provider prices bandwidth as something variable, much like an airline prices seats. This raises the question: why? After all, delivering 1Gb of bandwidth should incur the same cost for the first and the last byte delivered. Once the investment has been made in installing and powering equipment, the cost of delivering 1Gb of wireless bandwidth should be relatively stable. And yet the telecom industry has convinced consumers otherwise.

If you were to average the lowest price each of the big 4 charges for 1Gb of data, you would get $11.32. If you average how much they charge across the board, that price rises to $13.01. That gap is fairly substantial and represents a landscape that is wholly unfair to consumers.

What if, instead of charging different prices for the same bandwidth, the largest carriers just decided to charge a flat rate and maintain it no matter how much bandwidth a consumer used? But how much should they charge? Looking at their current price models, AT&T and Verizon can, in the best case, deliver 1Gb of data for $5.50; Sprint and T-mobile do it for $6.67. So these could be starting points: a flat per-Gb rate, no matter what. The first company to make that kind of offer would revolutionize the telecom world, as it would bring a new level of transparency to the industry. The net result would also be lower prices for consumers, which could drive heavier use and the rise of new usage types. And that could help ignite the next wave of successful technology companies in America.


The Real Price of a Smartphone Sun, 15 Sep 2013 22:00:16 +0000

The original address for this post is The Real Price of a Smartphone. If you're reading it on another site, please stop by and visit.

How much do iPhones, Samsung Galaxy, and others really cost?

Tristan is the founder and CEO of Keepskor. This post was initially published under the title The Real Price of a Smartphone. You can follow Tristan on Twitter at @TNLNYC



This week, Apple announced 2 new devices: one to mark the high end of its offering and a new cheaper option. But how cheap is really cheap? And how does it compare, price-wise, to other phones?


In order to assess the real prices of smartphones, I compiled a list of phones available on the 4 major carriers in the US market (AT&T, Verizon, Sprint, T-mobile) and, as a point of reference, I checked the prices of similar devices with no carrier lock as offered on Amazon. For each device, I went with the cheapest option available on the market for that given line. This gave me a list of devices offered by each carrier:

  • Apple (iOS): iPhone 5S 16Gb, iPhone 5C 16Gb, iPhone 4S 8Gb
  • Samsung (Android): Galaxy S4 16Gb, Galaxy S3 16Gb, Galaxy Note II
  • HTC: HTC One 32Gb (Android), HTC 8X (Windows Phone)
  • Nokia (Windows Phone): Lumia 925 (used the 928 on Verizon as the only difference is the network type it uses)
  • Blackberry: Q10, Z10

For Apple devices, I pulled the prices off Apple’s store (for upfront prices, I contacted each vendor separately). For all other devices, I pulled the upfront and total price off their web sites.

Upfront prices

When looking at the marketing behind each device, it’s easy to see that each carrier is trying to leverage its ownership of a particular device as an advantage. For years, US consumers have paid an upfront fee that locked them into a 24-month contract with their carrier. In the last year, there has been more transparency in device pricing, as T-mobile has done away with contract lock-ins, replacing them with a different type of financing. However, subsidized upfront costs still exist. This is what they look like today:


AT&T Verizon Sprint T-mobile
iPhone 5S $200 $200 $100 $99
iPhone 5C $100 $100 $0 $0
iPhone 4S $0 $50 $0 $0
Galaxy S4 $200 $200 $100 $100
Galaxy S3 $100 $100 $0 $20
Galaxy Note II $200 $250 $150 $100
HTC One $200 $200 $100 $100
HTC 8x $1 $50 $0 N/A
Lumia $100 $0 N/A $30
Q10 $200 $200 $100 $100
Z10 $100 $0 N/A $100

Looking at this chart, the cost of a new smartphone ranges from free to around $200. Apple’s three-tiered pricing (the 5S at the top, the 5C in the middle, and last year’s model at the bottom) is mirrored by Samsung, with the Galaxy S4 as its premium phone, the Galaxy Note II as its mid-range offering, and last year’s Galaxy S3 at the bottom of its pricing scale. As such, the leader in the Android ecosystem has reproduced the upfront pricing approach that allowed it to establish itself as the leading iOS alternative. HTC, meanwhile, is fighting a battle in which it has tried to price itself in the premium tier of the range, fighting the Galaxy S4 for customers.

In the back of the pack, Blackberry and Nokia are fighting for the number 3 spot (Nokia produces Windows Phones exclusively and will soon be part of Microsoft). Here the pricing strategies are quite interesting: while Blackberry is trying to mirror a high/low approach with the Q10 and Z10 on two ends of the spectrum, Nokia seems to be aiming for the value play, providing a device priced in the mid-tier range. And HTC is still providing last year’s version of its Windows Phone at the lowest point of the range.

Another thing of note is that there don’t really appear to be four different pricing approaches from the carriers: instead, the market has broken out into larger and smaller players, with AT&T and Verizon charging $50-100 more upfront than Sprint and T-mobile do for the same devices. Are T-mobile and Sprint using upfront prices as a marketing tool to attract new customers, or are those phones simply more expensive on the leading carriers?

The real price of your smartphone

To answer that question, we need to look at the real price of a phone if you pay for it in cash upfront instead of agreeing to the 24-month lock-in most carriers offer. Fortunately, each carrier presents full device prices on its site to show you how much of a discount you’re getting. Aggregated, the data looks like this (I’ve added unlocked device prices from Amazon as a point of reference):


Unlocked AT&T Verizon Sprint T-mobile
iPhone 5S  $649 $650 $650 $650 $649
iPhone 5C  $549 $550 $550 $550 $549
iPhone 4S  $450 $451 $500 $450 $450
Galaxy S4  $610 $640 $600 $600 $604
Galaxy S3  $390 $465 $450 $550 $452
Galaxy Note II  $530 $575 $600 $650 $580
HTC One  $608 $600 $600 $550 $604
HTC 8x  $300 $425 $400 $400 N/A
Lumia  $479 $430 $500 N/A $510
Q10  $580 $585 $550 $530 $580
Z10  $355 $440 $450 N/A $532

Looking at these prices, smartphones no longer look quite as cheap, but what is particularly fascinating is how aligned the pricing schedules are. There is relative consistency in how much you will pay for a given model across all carriers, with no carrier consistently cheaper than the others. One would think there might be an opportunity for a carrier willing to move some marketing dollars toward subsidizing the price of a hot mobile device, potentially selling it at a loss to acquire customers.


Device          Unlocked  Average  Median
iPhone 5S       $649      $650     $650
iPhone 5C       $549      $550     $550
iPhone 4S       $450      $463     $450
Galaxy S4       $610      $611     $602
Galaxy S3       $390      $479     $458
Galaxy Note II  $530      $601     $590
HTC One         $608      $588     $600
HTC 8X          $300      $408     $400
Lumia           $479      $480     $500
Q10             $580      $561     $565
Z10             $355      $474     $450


In fact, looking at the data in more detail, and drawing some averages across all the carriers, it appears that the full price of this year’s devices is neither more nor less expensive than unlocked devices. (However, if you are looking for last year’s model, you are better off buying an unlocked device and bringing it to the carrier, as prices are consistently higher on older carrier-sold phones.)
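For what it’s worth, the averages and medians in the table above can be reproduced with a few lines of Python (a quick sketch using the carrier prices listed earlier; expect small rounding differences here and there):

```python
from statistics import mean, median

# Carrier prices (AT&T, Verizon, Sprint, T-Mobile) for a few devices,
# taken from the pricing table above
carrier_prices = {
    "iPhone 5S": [650, 650, 650, 649],
    "iPhone 4S": [451, 500, 450, 450],
    "Galaxy S4": [640, 600, 600, 604],
}

for device, prices in carrier_prices.items():
    print(f"{device}: average ${mean(prices):.0f}, median ${median(prices):.0f}")
```

Running this gives, for example, an average of $611 and a median of $602 for the Galaxy S4, matching the table.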

How much for how much?

Given the relative stability in pricing among the different carriers, we can get a sense of price points for a top-of-the-line product. Looking at the manufacturers’ pricing, the top-of-the-line offerings are the iPhone 5S (Apple), Galaxy S4 (Samsung), HTC One (HTC), Lumia 925 (Nokia), and Q10 (BlackBerry). But how do they rate price-wise compared to other phones in the same range? The average price for unlocked premium phones is $585 (the median is $608) compared to $578 (and a median of $588) for carrier-locked devices. Apple, of course, is priced on the high end of the spectrum, but surprisingly Samsung and HTC have also pushed themselves above the average, with Nokia substantially pulling the average down.

Let’s run the same numbers on the cheapest devices: iPhone 4S (Apple), Galaxy S3 (Samsung), HTC 8X (using the HTC phone here, as no Nokia model from last year is consistently available, leaving this device as the cheap Windows Phone option), and Z10 (BlackBerry). The average price for an unlocked “cheap” smartphone is $398 (the median is $372) compared to $478 (and a median of $477) for carrier-locked ones. Surprisingly, the only phone to go above that average is the iPhone 4S, a phone that is older than all the others on the list. But here is where things get interesting: Apple’s iPhone 5C is priced 38% above the average cheap phone (compared to a 16% premium for the iPhone 5S over other “luxury” phone prices and a 13% premium for the iPhone 4S over other “cheap” phones).
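The premium arithmetic is straightforward; here is a minimal sketch using the unlocked Apple prices from the table and the $398 “cheap phone” average cited above:

```python
# Average unlocked "cheap" smartphone price, from the text above
cheap_avg = 398

# Unlocked Apple prices from the pricing table
for name, price in [("iPhone 5C", 549), ("iPhone 4S", 450)]:
    premium = (price / cheap_avg - 1) * 100
    print(f"{name}: {premium:.0f}% above the cheap-phone average")
```

This reproduces the 38% figure for the iPhone 5C and the 13% figure for the iPhone 4S.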

So while the C may be marketed for its colors, and rumors had it standing for “cheap,” its real meaning may be “cash is cool.”

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title The Real Price of a Smartphone. You can follow Tristan on Twitter at @TNLNYC

12 Wed, 11 Sep 2013 12:46:19 +0000

The original address for this post is 12. If you're reading it on another site, please stop by and visit.

12 years since 9/11

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title 12. You can follow Tristan on Twitter at @TNLNYC



I couldn’t vote that day, even though it was election day. The reason was that I was still a foreigner, having neglected to do the paperwork needed to acquire citizenship. Sure, New York was my home, but it was home to so many foreigners like myself, so why worry? America was wide open to all who came to help, and citizenship was a nice addition but hardly a requirement to do anything.

We knew Bloomberg as a businessman, but few cared about the mayoral race. The Giuliani era was ending with the kind of exhaustion that usually comes with longer cycles of leadership: a mayor who had overextended, someone who had started out OK but was mired in scandals relating to an extramarital affair and a messy divorce. On the Democratic side, the front-runner was Mark Green, a successful public advocate with relatively liberal policies.

So, like most New Yorkers that day, I didn’t have the elections at the top of my mind.

A routine

On a sun-drenched morning, I went through my relatively new daily routine, having moved offices to the Pavonia development center, a stretch of buildings overlooking downtown Manhattan from the New Jersey side. I took the PATH train to the World Trade Center, passed through that station around 8am, picked up a cup of tea at the local Starbucks, took in the beauty that downtown New York was, then went upstairs to work, sitting at a desk where my computer was turned away from the window because of the sun’s glare during the day.

“A plane’s hit the World Trade Center,” someone in the office remarked. I turned around and we could see black smoke rising from the northernmost tower.

“Like a twin propeller?” I asked.

“No, it was way bigger than that.”

What an odd accident, I thought. The previous week, I had chatted with a co-worker about the towers and how they were the kind of symbol terrorists would enjoy blowing up. My co-worker said something that, in retrospect, was very spooky: “If this were a movie, they’d take a plane and run it into them: that’s how Hollywood would blow them up.”

We had great seats at the apocalypse and were watching with the kind of distance rubberneckers have from highway accidents: a morbid curiosity mixed with fear and disgust.

Then the second plane hit and we knew things were different. We knew things weren’t good and I knew we were under attack. From whom? Why? All that would become clearer but one thing was sure, things were going to be different.

A momentary lapse of reason

Americans don’t cancel elections. But on that day they did. The primaries were called off and moved to a later date as the city swung into action to deal with one of the biggest crises it ever faced. As Giuliani went from lame duck to hero, the rest of the country descended into a momentary insanity that we’re still paying for today. The fears of people outside of New York were used to justify the largest expansion of the surveillance state this country had ever seen. Wrapped in patriotism and painted red, white, and blue, bill after bill became law, establishing a permanent state of fear and expanding secret, unchecked powers for intelligence agencies.

In 1979, the FISA court, a secret court to deal with surveillance warrants, had been established to oversee the intelligence community. Over the next 22 years, the court would review and approve 13,102 warrants. From 2001 to 2012 (the last reported date), the court reviewed 20,840 warrants and rejected a grand total of 11. And that is the part of our intelligence apparatus that works best.

The recent revelations by NSA whistleblower Ed Snowden have shown that searches performed without warrants are easy to perform and fairly common. Forget the debate over whether people are looking at metadata or data (after all, if I know your location metadata and the phone numbers you call, I already know a lot about you). During the first six months of 2012, Facebook received and complied with requests impacting “between 18,000 and 19,000″ user accounts; Microsoft reported that such requests impacted between 31,000 and 32,000 of its consumer accounts; Google put its number north of 33,000. Bottom line: somewhere around 100,000 accounts were opened up to the government while somewhere around 1,000 warrants were issued.

Yes, security is important; yes, we want to stop terrorist networks; but at what cost? 100,000+ searches, 1,000 warrants: something doesn’t feel right about that ratio.

Why politics matter

In 2008, I finally got my citizenship and voted. Worn out by a war that I felt had been unjustified before it even began (I’m talking about Iraq, not Afghanistan; Afghanistan was a clear target, Iraq never was), I felt I needed to make my voice heard, to get my vote counted. So I voted for the person who promised to end those wars, the person who promised a new direction, the person who campaigned on the promise of rolling back some of the intelligence overreaches that existed. He won. And then Obama turned his back on his promises: he secretly expanded the powers of the intelligence community; he pulled our troops out of Iraq so he could move them to Afghanistan, then Libya, and now maybe Syria. In the old bad days of the unsafe New York, we would have called that a three-card monte trick.

I also got a chance to vote for mayor. And I voted for Mike Bloomberg. Far from 2001, Bloomberg had proven to be the kind of responsible leader that makes a city run. One may disagree with the approach he took (I don’t but many in New York do) but one cannot deny the results: a city that is safer; a city that runs better; a city with a more diversified economy; a city with a better balanced budget; a city with a mayor beholden to few.

And like on 9/11, the city went through a primaries exercise. As a registered Democrat, I had to go out and pull the lever (while we’ve moved to electronic balloting, the primaries used the old machines this year) and I found myself very conflicted. On the day before the election, I was still that scourge of pollsters everywhere, the kind of person that the press talks about incessantly and most people sneer at: I was an undecided voter. The funny thing was that I knew who I was going to vote for in some of the other races, but mayor, well, mayor was just hard this year.

So there I was yesterday, sitting behind the curtain trying to figure out who to vote for.

The Democratic front-runner was Bill de Blasio, a public advocate with relatively liberal policies (some things never change). His message had been that he wanted to break from the Bloomberg era. Bill Thompson and Christine Quinn presented themselves as more moderate choices, but their deep ties to unions worried me.

Then there was John Liu: he had done a very quiet but very nice job as comptroller, improving the city’s funds while using his office to impact schools (by funding the removal of PCBs through bonds as a long-term cost-saving measure). His program was rooted not in grand gestures but in small incremental improvements: better energy efficiency, shorter sidewalks, a minimum wage increase, basically the kind of small-bore efforts with large impacts Bloomberg had undertaken over the last 12 years. His comptroller’s office used technology to improve the city in a number of ways, getting things done when others didn’t. On the other hand, his campaign had been marred by a three-year investigation into campaign finance, the kind of thing that denied him matching funds. With little money to get his message out, he was nowhere near the front-runner spot.

So I voted my conscience and pulled the lever for the guy most aligned with my beliefs, knowing that he wouldn’t move to the next round. I voted with the hope that I could send a signal to the front-runner that the center matters, that Bloomberg’s legacy of incremental improvements based on data is not one to throw out but one to embrace. I voted in the hope that politicians running in primaries look to what other segments of the population they need to embrace in order to win the general election.

So why this story about elections? Because politics matter. In 2000, Bush was elected president and, after 9/11, he took the country in a direction that I still feel is radically different from what Al Gore might have done in a similar scenario; in 2001, Bloomberg was elected and worked hard to heal a city that had been deeply wounded. The cynic in me could say that we voted for Obama in 2008 and things didn’t change, but the optimist in me thinks that we can, and we must, make our voices heard and get him to change direction.

12 years ago, the flames in our city were used to fan a call for war. 12 years later, we’re having a new discussion about a new war. 11 years ago, the buildup to war in Iraq was based on dubious evidence and led to more American deaths than what happened on 9/11.

Today, we have evidence that chemical weapons were used but no evidence tying Bashar al-Assad to them. The man spies on his people, tortures his enemies, and generally ignores human rights: he’s a ruthless leader who will do anything to hold on to power, even throwing his country into a bloody conflict. In other words, he’s a horrible individual. But if going after horrible leaders is what we should do, then we will be in a state of constant war around the globe. And recent reports from German intelligence suggest that he may not have approved of what his army did with chemical weapons, and may have denied them the right to use them. If that’s the case, then maybe the evidence before us is dubious, and if that’s the case, then maybe we should just stay out.

12 years on, and we’re still at war; 12 years on, we should move the discussion to pulling things back, incrementally, to the state our country was in prior to 9/11, a state of optimism, a state of potential, a state that left our country generally more prosperous while balancing civil liberties, economic ones, and the right of every American to be protected.

In Memoriam

Carlos Dominguez, Mark Ellis, Melissa Vincent, Michael DiPasquale, Cynthia Giugliano, Jeremy Glick, David Halderman, Steve Weinberg, Gerard Jean Baptiste, Tom McCann, David Vera.


This post is part of a continuing series in which I remember those I knew who were lost on that day. Here are the previous years: 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003, and 2002. For context, you might want to read The day after, which is about as raw as one can get about that day, as I wrote that piece less than 36 hours after the first plane hit. This is the longest series I’ve ever written and I expect to continue yearly until I can no longer write.


An Apple Bloodbath Sun, 08 Sep 2013 22:00:09 +0000

The original address for this post is An Apple Bloodbath. If you're reading it on another site, please stop by and visit.

It looks like Apple is about to refresh most of its iOS line.

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title An Apple Bloodbath. You can follow Tristan on Twitter at @TNLNYC



On Tuesday, Apple will unveil the new version of the iPhone, and may even unveil several new versions, but lost in the shuffle of new product announcements could be the fact that the company is going through its most significant set of product updates on the iOS line in a very long time. Let’s look at why I predict a complete line refresh this week.

Simply supplies

Over the last few years, Apple’s supply lines have evolved substantially, increasingly spreading across a variety of connectors, processors, and screen sizes. Take a close look and there are no fewer than 6 iPod variations, 5 iPhone ones, and 16 iPad models to choose from (and that’s before one considers the different color options).

With 27 iOS-powered products (not including the Apple TV) on the market, Apple now maintains supplies for at least 4 different chips (A4, A5, A6, and A6X), at least 3 different types of connectors (Lightning, 30-pin, and the Shuffle’s), 4 different camera processors, and 7 different types of screens across 5 different dimensions.

Let’s assume that you’re a supply chain expert and think about how you’re going to increase margins on all your product lines: the primary way to do so is to reuse as many components as possible across all your product, a move that gives you greater leverage when making large scale purchases. You know you cannot do much about the form factor of many of your products but there’s a lot you can do inside them.

So the first thing you might look at is the processors, which are often one of the most expensive components in your bill of materials. Out goes anything powered by an A4 processor.

But can you do something even more radical when it comes to processors? For example, you could look at moving your A5 processors out, by upgrading everything to at least an A6 chip.

Then you look at your connectors: you’ve made a bet on a new proprietary approach and most of the accessory vendors have adapted over the last year. So you say goodbye to anything without a Lightning connector.

Your camera processors are also an area where you can get much leverage, especially if you tell consumers that you’re upgrading everything to the better quality ones.

A better resolution?

Having simplified the product lines by retiring some older products, you are now left with a challenge on resolution. Your screens range from the 3.5-inch Retina display of the iPhone 4 to the 9.7-inch one of the iPad with Retina display. The spread of resolutions across iOS devices forces developers to ensure that their graphics look good at the following resolutions:

  • iPhone 4 and 4th Generation iPod Touch: 960 by 640, 326 pixels per inch (ppi)
  • iPhone 5 and 5th Generation iPod Touch: 1136 by 640, 326 ppi
  • iPad mini: 1024 by 768, 163 ppi
  • iPad 2: 1024 by 768, 132 ppi
  • iPad Retina display: 2048 by 1536, 264 ppi

So this means that developers today worry about 5 different resolutions to make their app look good on iOS. A new set of product offerings can help reduce that diversity to only two different resolutions (or 3 at most) through judicious excising of certain products. Not only will parts be cheaper (as bought in bulk) but the move will make developers much happier as testing cycles will decrease.
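As a sanity check, the ppi figures above follow directly from each screen’s resolution and diagonal size. A quick sketch (the diagonal sizes used here are the commonly cited ones, so treat them as approximations):

```python
from math import hypot

# (width px, height px, diagonal inches) for a few of the devices above
devices = {
    "iPhone 5": (1136, 640, 4.0),
    "iPad 2": (1024, 768, 9.7),
    "iPad Retina": (2048, 1536, 9.7),
}

for name, (w, h, diag) in devices.items():
    # ppi = diagonal resolution in pixels / diagonal size in inches
    print(f"{name}: {hypot(w, h) / diag:.0f} ppi")
```

This yields 326 ppi for the iPhone 5, 132 ppi for the iPad 2, and 264 ppi for the iPad Retina, matching the list above.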

The walking dead

Of course, there is always the argument that older products are due to be retired. There have been many complaints about Apple’s aging product line, and here comes the opportunity for a substantial refresh, the kind that will touch most of the products the company offers.

So taking all of the above into account, it looks like Apple is preparing for one of the biggest refreshes in the history of iOS.

Products that would be killed include:

  • Retina display iPad (4th generation): As the only iPad running on an A6X processor, it is bound for a rapid end of line.
  • iPad 2: It runs a dual-core A5 processor, its resolution is out of line with all the other products and, at 30 months old, it is one of the oldest iOS devices still sold by Apple. With a price point ranging between $399 and $529, it sits awkwardly between the iPad Mini (which sells for $329 to $659) and the iPad Retina (which retails for $499 to $929)
  • iPhone 5: As the only device running on an A6 after the upgrades, it looks like its chances of survival are limited.
  • iPhone 4 and 4S: Powered by the A4 and A5 processors, these are the low end of the iPhone line. However, they also represent a challenge, as they offer a different-sized screen than the iPhone 5, use 30-pin connectors, and take micro-SIMs instead of the nano-SIMs introduced with the iPhone 5. This is where the rumors of a cheaper iPhone come in: while the iPhone 5 may remain on the market at a $99 price point, the demise of the iPhone 4 as the “free” option leaves a gap in the product line. The new cheaper iPhone could help fill some of that gap.
  • 4th generation iPod Touch: This one has been a bit of a surprise, as it somehow survived cuts when the next generation was introduced. With a different screen resolution and the news that iOS 7 will not run on it, the writing is on the wall here. A question remains as to what might replace it. Will it be a price drop on the lower end of the 5th generation model? Or a whole new product?
  • iPod Classic: The venerable scroll wheel iPod is now officially at the end of its possible road. Its click wheel is not used in any other product Apple offers, its dock connector has not been upgraded, its screen is a different resolution from any other Apple product, and its hard drive is not a solid state one.
  • iPod Nano: With a pedometer, Nike+ support, and FM radio, the Nano is aimed at a particular subgroup of users: people who exercise. As there have long been rumors that Apple was going to come out with a watch, it seems that this product would now be superfluous. Expect it to be replaced by the new iWatch.
  • iPod Shuffle: The rumors of an iWatch will make the portability of the iPod shuffle a thing of the past. With no touchscreen, and few upgrades in the last few years, the Shuffle may finally be at that point where Apple can move forward without it.

Meet the new crew

With so many products seeing the end of the road, the potential for huge gaps in the product line exists, so we will see some upgrades.

  • iWatch: Long rumored as an iOS device one would wear on the wrist, the iWatch fits perfectly in the $149 slot occupied by the iPod Shuffle. This will be the new entry-point device for the iOS line. It will include a 2-inch display, pedometer, audio and video player, Siri, Nike+, and will run iOS apps that can fit in its 16GB of space. The whole thing will be powered by an A5 processor, and a 32GB version will be available for $249.
  • 128GB 5th Generation iPod Touch: While this will not be mentioned during the keynote, this new larger-capacity Touch will come in at a $399 price point (the 64GB version will drop to $299 and the 32GB to $229). Apart from the larger storage space, it will have exactly the same specs as other 5th Generation iPod Touch products.
  • iPhone 5S: The star of the presentation. With a new 64-bit processor (the A7), the new phone will not only run faster but also be able to address more memory, with 128GB becoming the new high-end version (for $399), 64GB the middle one (for $299), and 32GB the low end (for $199). The phone will also include a fingerprint sensor (embedded in its front button), a better motion sensor (for motion tracking above the screen), a pedometer (similar to what Samsung did with the Galaxy S4), and an LED flash on the back-facing camera. The S will stand for sensors, not speed.
  • iPhone 5C: This device, available in a variety of colors, will be the “cheaper iPhone”. With 16GB of space, no sensors, and an A7 processor, it will replace the iPhone 4S at the $99 price point; an 8GB version will be available at the “free” price point, replacing the iPhone 4.
  • 2nd generation iPad mini: This will be a modest upgrade, with a bump in the processor speed to the dual-core A7 that powers the new iPhone. Most significant in this upgrade, however, will be the move to a retina display from its traditional one.
  • 5th generation Retina display iPad: Sporting the new A7 chip, a better motion sensor, a fingerprint sensor (and an innovative way to manage profiles based on fingerprints), this iPad will essentially be a large version of the iPhone 5S. Most significant here is that we will see a price drop of about $100 on each model, with the new iPad starting at $399 and topping out at $829.

Simplification to conclusion

With this realignment, many improvements will have been made to Apple’s supply chain, improving its overall margins. In doing such a refresh, the company will have eliminated any non-iOS devices from its consumer electronics line (the death of the original iPod lines will increase margins across the board), dropped any non-Lightning connectors, simplified its processor map (the A6 and A6X won’t be part of the new line, replaced altogether by the A7 chip), and decreased the number of screen resolutions it offers (to smallest, small, and large).

On the low end of the spectrum, the company will have a tiny screen worn on a wrist (probably 512 by 384 at 264 ppi, mirroring the iPad proportions), a 1136 by 640 resolution at 326 ppi for its phones (and iPod Touch), and a 2048 by 1536 resolution at 264 ppi for its iPads. Developers will be happy to see that the simplified approach removes headaches when building apps that run across the new variety of devices.

The price lines will remain the same, with the notable exception of the entry-point iPod shuffle disappearing, as well as the high end of the spectrum opening up for a potential new product to be introduced in the future.

Given the long delays in refreshing most iOS product lines, the case above seems the optimal path for the company, and we will find out on Tuesday whether these guesses are correct.



How much is a user worth? Sun, 01 Sep 2013 18:00:47 +0000

The original address for this post is How much is a user worth?. If you're reading it on another site, please stop by and visit.

How much do investors price users at?

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title How much is a user worth?. You can follow Tristan on Twitter at @TNLNYC



User growth is often cited as the reason behind the valuations companies in the tech sector are given. But how much should an individual user be worth? Are all users created equal? And is the market’s assessment of user growth sufficient to estimate the value of companies? Let’s delve into the numbers and try to see if user growth should be as high a factor in evaluating companies as it is today.

What companies?

To get a sense of how those numbers are assessed, I decided to focus on publicly traded companies in the tech sector. I then narrowed my lens down to companies that were deriving the majority of their revenue from selling large numbers of eyeballs to advertisers. This created a basket of 4 stocks that were:

  • publicly traded
  • consumer-oriented
  • receiving the majority of their revenue from online advertising
  • trying to leverage the social network effect of large audiences

The final four companies were: Facebook, LinkedIn, Yahoo, and Google.

What numbers?

Because we are dealing with publicly traded companies, we have a wide choice of numbers to use. To get some sense of normalization and comparable data, I decided to focus on the last quarterly financial report for each company, which allows us to roughly normalize the data across 3 values: the number of users the company had at the time of the report, the revenue it reported, and its market capitalization on that day.

For the number of users, if it was not available in the quarterly report itself, I looked at reports of that number from around the time the quarterly report was made. Because each company treats the reporting of users differently, I went with the largest number they reported. For example, in the case of Google, the company and the media reported 450 million Gmail users, 400 million Google+ users, 900 million Android users, and 1.3 billion Google search users. As a result, I took the 1.3 billion value, as it is the largest of the set, possibly encompassing substantial overlaps with all the other numbers.

Revenues were announced by press release, so I’ve taken those numbers straight from the companies’ reports. Here, I focused on revenue directly attributable to the internet. This was only an issue with Google, where I removed the $998 million in revenue from its Motorola unit because it largely comes from handset sales rather than the internet.

For market capitalization, I’ve taken the data from Google Finance on the most recent market date (yesterday). This means those numbers are not fully aligned with the dates of the user counts but should be directionally correct.

Based on the values I gathered, I decided to estimate the value of an individual user by dividing the market cap by the number of users. I also calculated the average revenue per user (i.e., ARPU) by dividing the revenue number by the number of users. This gives us two useful indicators: the longer-term expected value of a user as well as the current revenue per user, which is a more conservative measure.

All the data included here was compiled from public sources; I did not use any internal information from any of these companies, so you’re free to go and Google for similar data.

On to the data

With all the disclaimers above, it’s now time to take a look at the data.

                                 Facebook   LinkedIn   Yahoo     Google
Market cap (in billions)         $100.56    $31.31     $27.67    $282.20
Number of users (in millions)    1,110      225        627       1,300
Revenue (in billions)            $1.813     $0.366     $1.135    $13.110
Per-user valuation               $90.59     $131.55    $44.13    $217.08
Average revenue per user (ARPU)  $1.63      $1.53      $1.81     $10.09

Looking at this data, the first thing one notices is that, with the exception of Google, most of the companies on the list have a relatively low average revenue per user. While ARPUs in the internet space are generally thought to be decent if they are over $2, it appears that Facebook, LinkedIn, and Yahoo still have some way to go before they get there. On the bright side, if they can convert those users to mobile users, they may have a shot at strong revenue, as the cost of user acquisition on mobile devices has recently risen to $1.80, giving Facebook and LinkedIn a fair amount of room for growth if they can find ways to present their audiences to mobile apps.

Google has shown that its monetization engine is a finely tuned machine that generates money hand over fist and that appears to be represented in its overall valuation, which shows the company to be valued at more than Facebook, LinkedIn, and Yahoo combined.

Normalizing the data

Let’s now take a look at what happens when we normalize that data into average and median values. The idea here is to get a sense of who’s batting above average (the numbers clearly show Google is there, but who else?) and what long-term expectations investors have for the revenue generated from those users:

                                 Average    Median
Market cap (in billions)         $110.44    $65.94
Number of users (in millions)    818.75     868.5
Revenue (in billions)            $4.11      $1.47
Per-user valuation               $120.84    $111.07
Average revenue per user (ARPU)  $3.76      $1.72

Based on this data, it is not all that improbable that the big players would generate between $1.72 and $3.76 per user per quarter (or between $6.89 and $15.06 on an annualized basis). What may be more difficult to accept is that those companies would hold on to users for the 8 to 16 years required to justify the current valuations based on forward revenue.
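That 8-to-16-year figure comes from dividing the per-user valuation by the annualized ARPU; a minimal sketch:

```python
# (per-user valuation, quarterly ARPU) for the average and median cases above
cases = {"average": (120.84, 3.76), "median": (111.07, 1.72)}

for label, (valuation, quarterly_arpu) in cases.items():
    # Years of revenue needed to "pay back" the per-user valuation
    years = valuation / (quarterly_arpu * 4)
    print(f"{label}: {years:.1f} years")
```

The average case works out to roughly 8 years and the median case to roughly 16.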

One thing that is clear is that the game of arbitrage currently happening on mobile platforms, where users are acquired at a certain price (currently around $1.80 per user) in order to return revenue over the lifetime of the user (known as LTV), is going to have to change, as acquisition prices will probably continue to increase when investors push those large companies to derive more revenue from their audiences in the mobile space.

Businesses focused solely on mobile should work on figuring out how to achieve LTVs north of $2 in the near term, and probably closer to $4-5 over time, if they expect to survive in the long run.


Is Google Killing Cable? Sun, 25 Aug 2013 22:00:07 +0000

The original address for this post is Is Google Killing Cable?. If you're reading it on another site, please stop by and visit.

Chromecast may be the future of TV

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title Is Google Killing Cable?. You can follow Tristan on Twitter at @TNLNYC



New York City is the latest battleground for revenue between TV giants: while large broadcasters have recently been set back by their loss to Aereo, the company that provides legal streams of over-the-air broadcast signals on the internet, giants Time Warner Cable and CBS have been fighting, leaving their customers without programming from CBS.

But as cable companies and TV broadcasters battle over how many dollars each gets from every cable subscription, companies like Apple and Google are quietly moving to replace the old system.

Apple took the first tentative steps with the AppleTV, a small device that allows users to stream content from certain apps on iOS phones and tablets directly to their TV screens. Priced at $99, the device essentially turns your TV screen into a second screen for the content you consume on mobile devices.

Google has decided to enter the space with Chromecast, a device priced at $35 that allows anyone to stream content from the Chrome web browser (available on PC and Mac), Netflix, and YouTube. For less than the price of a night at the movies, a couple can essentially internet-enable their TV screen and use any mobile device (iOS, Android) or computer running Google Chrome (Windows or OS X) to stream internet content.

Both AppleTV and Chromecast are relatively simple to set up, and the controls are simple enough that anyone who understands the basics of using a web browser, tablet, or smartphone can easily send video to the screen. Because they are so easy and cheap to set up, the two devices point to a world where cable TV subscriptions will no longer be as relevant as they are today, with internet connectivity becoming the dominant data stream into the house.

If you look at the recent programming efforts from Netflix, it is clear that today’s successful TV channels do not need a physical programming grid that ties them to a rigid schedule, nor do they need to be distributed as part of a cable TV package. As a product offering, Netflix is getting increasingly hard to distinguish from HBO, except for the fact that its subscribers get to choose what they watch and when they watch it.

As has long been my contention, there are very few types of programming that justify real-time broadcasting: news, sports, and award events represent the exception and here, companies like Al-Jazeera have proven that you can build a following by doing live streaming over the internet (unfortunately, upon its purchase of the Current.TV cable channel, Al-Jazeera has decided to abandon its pioneering ways, attaching itself to the dying traditional model instead).

With some of the most expensive cable TV channels representing almost $5 on a consumer’s bill (whether they watch them or not), there is room for a very different model, one where individual channels could offer their content online (many already do) for a fee, within subscription apps that could be sent to the TV with AppleTV or Chromecast.

Aereo, for example, already offers over-the-air broadcasts (about 20-30 channels) as an app that runs on iPhones and iPads and can beam to AppleTV, and it is safe to assume that it will offer something similar on Android and Chromecast in the not too distant future.

If individual TV channels were to charge a membership fee similar to that offered by Netflix ($8), they could increase their overall revenue while leaving cable TV providers behind. Infrastructure costs have been rapidly dropping and companies can now purchase such turnkey infrastructures from third party companies like Brightcove or MLBAM, the digital arm of the MLB, for relatively small per-user prices. Channels aggregating streams over the internet would then have substantially more information about what their viewers are watching and when, giving them the ability to decide what to fund and what to kill, and whether to charge higher advertising rates for one show versus another.

Such a breakdown will happen, and one can assume that it will happen relatively soon if incumbents do not want to be defeated by newcomers. In the 1980s and 1990s, large companies like Turner Broadcasting, Hearst TV, HBO, Starz, and Cinemax were built on the new technology of cable TV, displacing traditional broadcasters as the leaders in the TV space. It is only a question of time before the same thing happens again, but with the internet as the distribution network instead of the cable aggregators consumers have gotten used to.

And when that happens, the incumbents may find themselves with little choice but to join the newcomers.

Apple Fingers Payment Revolution Sat, 17 Aug 2013 22:35:29 +0000

The original address for this post is Apple Fingers Payment Revolution. If you're reading it on another site, please stop by and visit.

Apple fingerprinting: more than just ID?

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title Apple Fingers Payment Revolution. You can follow Tristan on Twitter at @TNLNYC


It was an innocent enough remark, but it triggered a storm of outrage: earlier this week, New York mayor Mike Bloomberg advanced the idea of using fingerprint-based locks in public housing buildings, drawing fire from the candidates seeking to replace him.

Meanwhile, the tech world has been abuzz over what appears to be the inclusion of a fingerprint scanner in the next version of the iPhone.

So the question now should be: will the fingerprint scanner in the new iPhone merely be a gimmick that will remain largely unused and forgotten a year from now, or will it herald a new age of respect for a centuries-old technology?

In the US, fingerprinting is used in a number of areas, but the one that resonates most with the public is the criminal space. People working for the government, people working in finance, and people with access to all sorts of “limited access” information are routinely fingerprinted; members of the military and civil employees get fingerprinted; parents applying to adopt a child are fingerprinted; people getting a green card or applying for citizenship are fingerprinted. Some Windows-based PCs have had fingerprint scanners built in for almost a decade, and it is becoming increasingly common to include fingerprint scanners in door locks at banks and government offices. Yet large portions of the US public are opposed to fingerprinting, fearing that it will lead to the state building a large identification database of individuals around the nation (this apparently being fine for the Social Security Administration, whose number has become the ubiquitous identifier handed to employers, banks, mortgage agencies, etc.).

By contrast, people overseas are fingerprinted on an even more regular basis. For example, European citizens applying for a passport all provide their 10 fingerprints at application time, and that information is then recorded on a chip that sits inside the cover of their passport (such information can be used at border control in every country, including the US). The same is true for the identity cards that are common throughout most of the developed world.

So the introduction of fingerprint sensors in the next generation of iPhones seems like a feature that could outrage Americans but be useful to everyone else. The Chinese Qin dynasty (around 200 BC) used fingerprints in clay tablets as a way to authenticate official documents and help solve burglary investigations, but it wasn’t until the late 19th century that the concept took hold in the West, mostly for criminal purposes.

Since the early 2000s, fingerprinting has increasingly been used for non-criminal purposes outside the United States. In the UK, for example, schoolchildren are routinely fingerprinted, with digital matching of their fingerprints used to check out books at school libraries, register class attendance, or pay for school meals.

iPhone fingerprinting

Last year, Apple acquired Authentec, one of the leaders in fingerprinting technology, for $350 million (one of its largest acquisitions), signaling that the company is serious about this space. Authentec is the type of company you don’t hear much about in the news because it made technology that was largely embedded into other devices. In fact, at the time of the acquisition, the company was providing the technology that powered the fingerprint scanners in Lenovo and Samsung laptops, the kind of technology that, over the last year, has disappeared from those product lines. The company was also in the process of switching to offering a full suite of security services in the mobile space, with Samsung as one of its biggest customers there.

Apple could have, like other companies, licensed the technology for inclusion in its devices, but it felt this was an important enough piece of technology to buy outright (witness, by contrast, how Apple licenses technology from Nuance to power Siri). The disappearance of the technology from many of Apple’s competitors’ lineups seems to point to the acquisition having brought an abrupt end to licensing agreements around fingerprint scanners.

But why would Apple pursue a technology that is largely seen as negative? Well, for starters, Apple always looks at things differently and can often convince its customers to modify their behavior. With a treasure trove of hundreds of millions of credit cards on file, Apple sits on a digital mine when it comes to payment data. The last piece it needs in order to create a revolution around payment is a way to identify every user uniquely, and fingerprints have, for over a century, proven to be exactly that type of technology. So the key to profile management on the next generation of electronic devices may be fingerprinting, and Apple has bought itself a large seat at the table.

Imagine, for example, turning on your iPad by pressing the button at the bottom of the screen, but this time things are different: the applications your girlfriend or wife installed are no longer there, and the games your kid downloaded don’t show up. Instead, it’s your own space, tied to your fingerprint; when your kids pick it up, it’s their space, tied to their fingerprints, and so on. Each desktop is now aligned to a single profile attached to a fingerprint.

Go one step further and take your phone to a merchant like Starbucks. The merchant’s register is tied to your iPhone (they already do this with an app), but now all you have to do to make a payment is press that fingerprint scanner: your identity is authenticated, tied to your credit card, and payment is made with proof of ID. Square may have a hard time competing with such a solution, and Apple could revolutionize how money changes hands.

Add in AirDrop, and that fingerprint scanner can become a way to beam money from your device to a friend’s device when splitting the check at a restaurant. No cash required, with all banking managed by Apple’s iWallet, which already includes your credit card information.

Walk into the Apple store and buy a new device. To pay today, you hand over your credit card and it is swiped through a Verifone custom sleeve on the iPhones Apple employees carry in the store. Tomorrow, you just press the button: your fingerprint matches the data that already identifies you on Apple’s servers and clears the transaction. No need for an NFC chip, or even a credit card, to make those payments: just a finger.

If Apple presents those use cases, they will be hailed as revolutionary and Apple will get an early lead in the next generation of payment and profile management tools (in 2011, Motorola tried to include a fingerprint scanner in its Atrix phone, but the feature was so unreliable and had so few use cases attached to it that few consumers ever used it). Those efforts could be extended to the security space (e.g. using your fingerprint instead of a password to unlock your phone) and the general identity space (using your phone as an approved form of ID in corporate environments), giving the company a new level of control in the corporate arena.

Along the way, the company could essentially demystify fingerprinting and build one of the largest profile databases in existence.

Apps Economics: Fool’s Gold? Sun, 11 Aug 2013 22:00:04 +0000

The original address for this post is Apps Economics: Fool’s Gold?. If you're reading it on another site, please stop by and visit.

Can developers make a living on apps?

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title Apps Economics: Fool’s Gold?. You can follow Tristan on Twitter at @TNLNYC


The last few years have seen an unprecedented number of people rushing to develop mobile apps for iOS and Android. But looking at the installed user base on each platform and the information available on payouts made by the different companies, it appears that the vast majority of developers will find themselves with little revenue to show for it.

Google, Apple, Microsoft: Rich in users, developers

The consensus around the industry is that Google dominates the mobile market with 900 million users, while Apple follows with 600 million iOS devices purchased, and Microsoft comes in third with an estimated 12 million Windows Phones sold (the vast majority of those, 81%, sold by Nokia).

With different forums aimed at attracting developers, each company announces the size of its market differently.

Apple, at its Worldwide Developer Conference, talked about 1.25 million apps in the app store, accounting for 50 billion downloads and $5 billion paid out to developers. The company takes pride in being able to pay out its developer community, and internal app store data, gathered from sources close to the company, indicates that the numbers are in line with the actual payments made to developers.

At Google I/O, the largest Android developer conference, Google touted 150,000 developers responsible for over 800,000 apps. While the company does not break out revenue numbers for its apps, recent data in its financial filings seems to indicate somewhere around $900 million in payouts to developers, and discussions with external research analysts put the number of apps downloaded from the Google Play store at around 48 billion, close to what Apple has claimed.

Microsoft, meanwhile, has been claiming 160,000 apps in its store from 45,000 developers. In a recent interview, Microsoft officials claimed that the average user downloaded 54 apps, which would put their download count at around 650 million to date. While the company does not break out data for its mobile division, looking at the variations in the accounts payable line of its 10-Qs over the quarters before and after it introduced its app store shows a variation of up to $100 million since 2011 that could be attributed to the app store.

So taking all that data into account, we can summarize it as follows:


                                      Google   Apple   Microsoft
Number of users (in millions)            900     600          12
Number of apps (in thousands)            800    1250         160
Number of developers (in thousands)      150     235          45
Number of downloads (in billions)         48      50        0.65
Paid to developers (in millions)         900    5000         100

Looking at this, it is clear that Apple is winning the game in terms of total number of apps and money paid to developers. But lost in the shuffle is how much money developers can actually make on those platforms.

Meager dollars per download

Taking the data in front of us, we can get a sense of how many apps the average developer creates and what kind of revenue a developer can expect from those apps on average (granted, power laws dictate that a small number of developers will do extremely well while the vast majority will fail, but we are looking at averages here).


                                Google    Apple   Microsoft
Number of apps per developer         5        5           3
Number of downloads per app     60,000   40,000       4,062
Revenue per download          $0.01875    $0.10     $0.1538

Based on this, the average developer on those platforms is pretty busy, developing 3 to 5 apps depending on the platform. Interestingly, Android is the big winner on downloads for a given app, but this is largely offset by substantially lower revenue, with the average app download bringing in around 2 cents for its developer. Apple fares 5 times better, bringing in a dime for every one of the 40,000 potential downloads a developer could strive for. But the interesting thing is that Microsoft’s platform may be substantially more rewarding to its developers, bringing in $0.15 per download (a fact that is offset by download numbers only 10 percent of what the other platforms can offer).
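
The per-app and per-download figures above follow directly from the raw platform numbers; here is a quick sketch of the arithmetic (note that the apps-per-developer line in the table appears to round Microsoft's figure of roughly 3.6 down to 3):

```python
# Deriving the averages from the raw platform figures quoted earlier:
# apps per developer, downloads per app, and revenue per download.
platforms = {
    #             users(M)  apps(K)  devs(K)  downloads(B)  paid($M)
    "Google":    (900,      800,     150,     48,           900),
    "Apple":     (600,      1250,    235,     50,           5000),
    "Microsoft": (12,       160,     45,      0.65,         100),
}

for name, (_users, apps_k, devs_k, dl_b, paid_m) in platforms.items():
    apps_per_dev = apps_k / devs_k
    downloads_per_app = (dl_b * 1e9) / (apps_k * 1e3)
    revenue_per_download = (paid_m * 1e6) / (dl_b * 1e9)
    print(f"{name}: {apps_per_dev:.1f} apps/dev, "
          f"{downloads_per_app:,.0f} downloads/app, "
          f"${revenue_per_download:.4f} per download")
```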

With the average paid app retailing for $.99, what we see here is the direct impact of free apps on those marketplaces. Android’s substantial lead in offering free apps cuts deeply into the average revenue paid out to developers while the smaller availability of free apps on the Windows platform may work to its advantage.

But what does that mean in terms of actual revenue?

Impact on developers’ wallets

Multiplying the average revenue per download by an app’s average number of downloads, we can get a sense of what an average developer can expect to make on an app today. Multiplying that by the number of apps an average developer creates, we get a sense of the revenue one can pull in this way:


                                Google    Apple   Microsoft
Average revenue per app         $1,125   $4,000        $625
Average revenue per developer   $6,000  $21,276      $2,222

What we see here is that while decent amounts of money can be made on an app, a hard-working developer on iOS will be able to buy a new car, while Android and Microsoft developers will be forced into the used car market if they plan to take those earnings on the road. At $4,000 in average revenue per app, Apple has a lead, but the numbers still raise the question of how many developers can actually make a living directly from apps on any platform. Direct revenue from the apps themselves may not be able to justify large development teams, but other revenue sources (advertising, for example) may help developers increase their take.
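
These averages are just the previous figures multiplied out; a minimal sketch of the computation:

```python
# Average revenue per app = downloads per app x revenue per download;
# average revenue per developer = total payouts / number of developers.
# Inputs are the per-platform figures quoted above.
data = {
    #             downloads/app  $/download  paid out($M)  devs(K)
    "Google":    (60_000,        0.01875,    900,          150),
    "Apple":     (40_000,        0.10,       5000,         235),
    "Microsoft": (4_062,         0.1538,     100,          45),
}

for name, (dl_per_app, rev_per_dl, paid_m, devs_k) in data.items():
    revenue_per_app = dl_per_app * rev_per_dl
    revenue_per_dev = (paid_m * 1e6) / (devs_k * 1e3)
    print(f"{name}: ${revenue_per_app:,.0f} per app, "
          f"${revenue_per_dev:,.0f} per developer")
```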

Of course, these are average values, and many hope to find the next big hit, the one that will land well above the median. But for every app that goes north of those numbers, the average for the remaining apps drops.

Where are the opportunities?

There is, however, some hope. While Apple has developed a rich market for developers, Google and Microsoft both have opportunities to improve. For Google, the focus should be on helping developers monetize their apps so the numbers come more in line with what Apple can offer: even if it only reached half of what Apple does in revenue per download, it would narrow most of the gap in what developers make. For Microsoft, the story is all about the number of users: if it maintained its average revenue per app while growing its user base, it could potentially out-earn all the other platforms.

In all three cases, however, there is much work to be done to increase monetization of free apps. And that may be the next growth opportunity for any developer, as it represents an opportunity substantially larger than the existing one.

Every app developer hopes to build the next big hit but, as is the case in any hit-driven market, a large group will be left behind with little to show for their efforts. At the end of the day, the model for mobile apps is no different from any other in the past: whether it was gold in the Yukon, websites in the 1990s, or app development today, larger amounts of revenue go to those who enable development than to those who do the developing. Levi Strauss made his fortune selling picks and shovels to gold prospectors; Adobe, Amazon, and Google made theirs enabling web developers; and as the mobile revolution takes hold, new players will emerge providing tools to create apps, and those may be the biggest winners.

After mobile – Smarter cities Sun, 04 Aug 2013 22:00:50 +0000

The original address for this post is After mobile – Smarter cities. If you're reading it on another site, please stop by and visit.

The internet of things is giving our cities a brain.

Tristan is the founder and CEO of Keepskor and writes the blog where this was initially posted under the title After mobile – Smarter cities. You can follow Tristan on Twitter at @TNLNYC


Over the last couple of weeks, we’ve looked at the next big waves now that the shift from PC to mobile devices is entering a more mature phase. Among the trends we’ve explored so far is how technology companies are looking at the living room and the human body as two big battlegrounds. This week, let’s survey a new phenomenon called the internet of things and its more specific application to cities.

Started as a concept over a decade ago, the internet of things stems from the idea that every object could eventually be attached to the internet. To date, its richest implementation has been the use of RFID technology to track inventories in large warehouses and at big box retailers. Over the last decade, companies ranging from giants like WalMart to specialty retailers like Prada have used such technology to get a better sense of where their assets are.

With sensor and networking technologies becoming cheaper, more and more devices are now connecting to the internet (some estimates put the number of physical objects already connected at as many as 15 billion), and those devices are broadcasting information into the cloud that helps us understand how to make better use of resources and where more emphasis is needed.

Songdo: Connected from the start

South Korea has long been an early adopter of technology, still boasting the world’s fastest consumer internet network and a culture that is more accepting of technological progress, as a whole, than most places. So it is hardly surprising that South Korea would be among the first countries to attempt to harness the power of the internet to create smarter cities. In 2001, it reclaimed a portion of its coastline and turned it into Songdo, a high-tech city where people truly live in the future:

Here, the electric infrastructure is completely wired, not only providing the city’s managers with information about where and when most of the current is used but also giving individual residents greater control over their personal usage. Likewise, roadways have built-in traffic sensors that provide the city’s transportation administration with details on traffic patterns, allowing it to smartly reprogram traffic lights to reduce congestion and accidents and optimize traffic and pedestrian speeds. These “smart roads” connect over the internet to a central office, where they also provide information on weather conditions and can serve as an early warning in case of seismic activity (South Korea is prone to earthquakes, so such sensors can make a large difference to residents).

Noise, Pollution, and even Parking Spaces

Santander is a mid-size town in Spain. In an effort to reduce air and noise pollution, the city has turned to technology and become one of the testbeds for large-scale sensor deployments. Through a public/private partnership called Smart Santander, the city has deployed around 10,000 electronic monitoring devices. Each device includes two radios to communicate with other devices (creating a wireless network between them), a GPS, and a host of sensors to monitor carbon monoxide emissions, noise, temperature, ambient light, and whether a car is parked in a particular space. Each device updates its data over the internet in real time, allowing drivers to use a mobile app or read smart signs to find the next available parking spot.

Using the same technology, Libelium, one of the Spanish startups behind Smart Santander, has expanded into uses as varied as radiation monitoring in Japan, traffic restructuring in Spain, and public transportation improvements in Serbia.

Savings with sensors

Meanwhile, in the United States, several municipalities have been adding new monitors to their sewer and water systems, allowing them to save large amounts of money through smarter monitoring. South Bend, Indiana, for example, has reduced wastewater overflows by 23% and entirely eliminated clogged-sewer incidents by installing technology from IBM that allows it to aggregate the data it gets from all its different agencies and turn it into useful information. The city estimates that it will recover the cost of installing this new technology in less than 2 years.

Further south, in Miami-Dade County, Florida, city administrators estimate they will save $1 million a year and reduce water consumption by 20% through the use of smart sensors that allow them to repair water leaks more quickly. And in Texas, Corpus Christi analyzed almost 4,000 water main breaks and discovered that simply changing the size of some smaller pipes was enough to dramatically reduce such incidents.

Keeping it cool

And it’s not just cities getting involved. In New York City, Con Edison, one of the largest electric utilities, kicked off the CoolNYC program, which provides, free of charge, equipment that New Yorkers can attach to their air conditioning units to help reduce consumption during heat waves (the plugs have temperature sensors and wireless radios that connect to the internet), monitor their energy costs, and even schedule when to turn their A/C units on and off through either the web or a smartphone. Consumers who take part in the program can, for example, leave their A/C unit off while they are at work, then pull out their phone before they head home and arrive to a cooled-down house.

The company also monitors usage and, if usage spikes endanger the power grid in ways that could lead to blackouts, it can selectively and remotely turn off A/C units sitting in areas where the temperature is already lower than what the customer set. This means that Con Edison can proactively manage a small portion of its electrical grid through smart use of data, ensuring that all customers are served while waste is reduced. Every step of the way, however, it leaves consumers in charge, letting them choose to turn certain units on or off and allowing them to decide whether Con Edison may turn off their A/C units at all. With 6 million window air conditioning units in New York, such a program could have a major impact on how much energy is consumed on hot summer days.
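
The selective-shutoff logic described above can be sketched as a simple demand-response rule. This is an illustrative sketch only, not Con Edison's actual system: the unit fields, the capacity threshold, and the function names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ACUnit:
    room_temp: float   # temperature reported by the smart plug (deg F)
    setpoint: float    # temperature the customer asked for (deg F)
    opted_in: bool     # customer allowed remote shutoff
    running: bool = True

def shed_load(units, grid_load_mw, capacity_mw, threshold=0.95):
    """When grid load nears capacity, turn off only those units whose
    customers opted in AND whose rooms are already at or below the
    requested temperature, so comfort is never sacrificed."""
    if grid_load_mw < threshold * capacity_mw:
        return []  # grid is healthy, leave everything running
    shed = []
    for unit in units:
        if unit.running and unit.opted_in and unit.room_temp <= unit.setpoint:
            unit.running = False
            shed.append(unit)
    return shed

# A spike near capacity sheds only the opted-in, already-cool unit:
units = [
    ACUnit(room_temp=71, setpoint=72, opted_in=True),   # cool and opted in
    ACUnit(room_temp=80, setpoint=72, opted_in=True),   # still hot: keep on
    ACUnit(room_temp=70, setpoint=72, opted_in=False),  # not opted in: keep on
]
shed = shed_load(units, grid_load_mw=9_700, capacity_mw=10_000)
```

The key design point is the second condition: a unit is only shut off when doing so costs the customer nothing, which is what lets the utility trim peak load without taking control away from residents.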

Expect such programs to become more common relatively soon, as costs continue to drop and cities and utilities see the value of outfitting their customers with tools that reduce waste and increase overall quality of life.
