Category Archives: Internet & Web X.0

Thinking About Content Consumption

I’ve been thinking a lot about content consumption lately.  The main reason is that, for me, it’s broken.  Sub-optimal.  There is no way I can keep up with everything that is actively passed my way, plus everything passively written that I may want to consume.

There are a few reasons for this:

  • Not enough time in the day (which one can argue comes down to prioritization)
  • Content overload – at some point my brain switches OFF, usually around 7/8pm after a 12-hour day
  • Filtering of content is weak, though lots of people are trying to solve this
  • Access to content at the *right* time

The first RSS reader I used (and paid for) was Newsgator, back in 2005.  I used Newsgator for a while and then switched over to Google Reader for a specific reason – though I don’t remember precisely what it was.  I loved Google Reader for a while, but the issue I had with RSS readers was that if I did not check them for a period of time, I felt inundated with information and would just reset all the feeds (mark as read) and start from scratch… losing out on all of that content in the process.  Needless to say, I’ve not used my RSS reader in the past 8 months or so.

I’ve recently been using My6Sense on my iPhone and happen to like what they are doing.  They are using something they call Digital Intuition (their secret-sauce filtering whiz-bang) to filter out articles they think I may not like, and it’s working so far.

Being that I have recently seeded a company in the content consumption space, I’d like to share a few thoughts with you, which I hope you’ll riff on, expand, and debate:

  • Screen space (i.e. your laptop screen, netbook screen, iPhone, etc.) is not used optimally.  Why should one application dominate the entire screen?  There are times when this is needed (i.e. spreadsheets), but imagine dedicating a portion of your screen to “content”
  • Content consumption can be done two ways: passively & actively.  Many people are trying to solve content consumption through active means, but there is certainly opportunity around passive consumption.  It’s not OR, but rather “AND.”  I think the killer content consumption application is passive AND active.
  • Filtering content is not easy.  Active filtering (such as Pandora’s) and passive filtering (such as Amazon’s) have not been perfected, though many users expect them to be.
  • The reason for content consumption is not always logical or rational, which throws off an intelligence engine/filtering product.  Some of the solid methods account for this, but you would be surprised how many do not.

An interesting company to watch in the space that just emerged is SocialVisor.  They released a ticker-like service (with an interface not dissimilar to what I just seeded) that is essentially a dumbed-down TweetDeck.  Nicely done.

I continue to think there is room to aid consumption.

  • There are 27MM tweets per day, 126MM blogs, 234MM websites and 350MM people on Facebook.  No shortage of content being created. (stats here)
  • There is over $500 billion of market cap chasing the content-indexing game.
  • Who is chasing content consumption?  (me!)

Opening My Wallet For Media Consumption

Being that I work in the digital media industry, there is rarely a day that goes by without some conversation around free vs. paid.  Some folks in the office say they would gladly pay ~$20 for the NY Times online, and others argue that they won’t pay anything because eventually they will hear the news anyway.  I think this argument is less about actually paying for the news and more about paying for timely access to it.  If you could read the news *first,* shouldn’t there be a price for that?

Many of these conversations are being sparked by reports of the NY Times Online putting up some sort of paywall.

Here is what I think they should do:

Go to a pay model, but make the pay model cover only the first 12 hours of any article/story.  In other words, if the Times publishes an article about the latest angel investment I made, then for the first 12 hours of that article’s existence on the Times online, it would only be accessible to subscribers.

Why I think this works:

  1. An online subscription model generates revenue for the NY Times
  2. Stories go “public” after 12 hours, which allows for page consumption (and views) beyond the subscriber base, thus increasing inventory for the NY Times sales force to sell against (advertising)
  3. An online subscription model creates a velvet rope – less about actually consuming the content and more about recency of consumption (it’s all about getting it first)
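To make the mechanics of that 12-hour gate concrete, here is a minimal sketch in Python – my own illustration, not anything the Times has proposed; the function and the subscriber flag are hypothetical:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

PAYWALL_WINDOW = timedelta(hours=12)

def can_read(published_at: datetime, is_subscriber: bool,
             now: Optional[datetime] = None) -> bool:
    """Subscribers always get access; everyone else waits until the
    article is more than 12 hours old."""
    now = now or datetime.now(timezone.utc)
    return is_subscriber or (now - published_at) >= PAYWALL_WINDOW

now = datetime.now(timezone.utc)
fresh = now - timedelta(hours=3)    # published 3 hours ago
stale = now - timedelta(hours=13)   # published 13 hours ago
print(can_read(fresh, is_subscriber=False))  # False – still behind the velvet rope
print(can_read(fresh, is_subscriber=True))   # True
print(can_read(stale, is_subscriber=False))  # True – open to everyone
```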

What are your thoughts?  Think this can work?  I think this model can move well beyond the NY Times, no?

My Twitter Tipping Point

One of my tweets today was:

I got a few immediate responses from @adventurista, @jbguru, and @mediahorizons asking me what the tipping point was.  My tipping point was when I started exploring the apps ecosystem surrounding Twitter to fully understand what a platform is, and I thought I’d use this post to walk through how I’ve navigated the waters.

I joined Twitter 1,026 days ago (March 19, 2007), which was during SXSW 2007.  My first tweet is below:

My first tweet

I did not really know what Twitter was – other than a way for the TechCrunch crowd to communicate back and forth with each other and, in some ways, use it as an ego machine.  @HowardLindzon always put a smile on my face with his obnoxious and ridiculous tweets, and in stark contrast to Howard, @andrewparker was sharing (IMHO) very interesting insights and links.  Twitter became a firehose of content, so controlling who I was following was critical.

Fast forward to today: I am following around 195 people.  While I would probably follow more, the firehose of content becomes so great that there is no way I can keep up with everyone.  Only now are tools being built to help filter and manage the billions of tweets.

I very rarely use Twitter.com as the place where I write my tweets.  Only 1 of my last 20 tweets (5%) was written at Twitter.com; the majority are written from a communications platform such as TweetDeck, Seesmic, or the mobile versions of TweetDeck and Tweetie.  The reason I use these communications platforms is that they help me quickly navigate the content by directing me to all of my friends’ content, direct messages, and mentions, and recently I set up ways to keep track of certain keywords to see what people are saying.  It’s a very simple social monitoring tool.  A few keywords I’m tracking today are KBS+P, Cliqset and Snackr, through programs like TweetDeck and Squawk.

The more I deepen my experience with Twitter, the more I understand the ecosystem and how multi-dimensional it has become.

I have not seen statistics on how many people are using these communications platforms, but given that TweetDeck is being mentioned at Best Buy for an Interscope promotion, I can imagine there have been quite a few downloads.

To fully understand Twitter as a platform, you need to dig deeper into the developer movement.  I’ve been spending some time with some developers recently and want to highlight one or two which have really helped me understand the capabilities.

Meet wow.ly – Kevin Marshall, a.k.a. Falicon

Kevin has been building a few apps around Twitter under the wow.ly name with partner Whitney McNamara.  Think of wow.ly as a collection of apps that manipulate Twitter and other content (accessed through an API) to provide value to their users.  wow.ly has started with Twitter because it’s the most easily accessible and has scale.

ConversationaList is an application that is a Twitter list of the people that you talk to (and about) on Twitter. The list is automatically updated daily, so that it always reflects the people that you are paying attention to right now. If you @reply (or @mention) someone, they’re added to your list. If you stop talking to that person, they drop off your list.

This provides lots of value as it helps me navigate my personal firehose and allows me to find very relevant information.
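For the curious, here is a rough sketch of how such a list could be derived from your own recent tweets.  This is purely my guess at the logic, not wow.ly’s actual implementation; the tweet structure and field names are assumptions:

```python
import re
from datetime import datetime, timedelta

MENTION = re.compile(r"@(\w+)")

def conversational_list(my_recent_tweets, days=30):
    """Collect the handles I've @replied or @mentioned recently.

    `my_recent_tweets` is assumed to be a list of dicts with
    'text' and 'created_at' (datetime) keys.
    """
    cutoff = datetime.utcnow() - timedelta(days=days)
    handles = set()
    for tweet in my_recent_tweets:
        if tweet["created_at"] >= cutoff:
            handles.update(h.lower() for h in MENTION.findall(tweet["text"]))
    return sorted(handles)

tweets = [
    {"text": "@falicon nice work on hivemind!", "created_at": datetime.utcnow()},
    {"text": "heading to the office", "created_at": datetime.utcnow()},
]
print(conversational_list(tweets))  # ['falicon']
```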

Another app they have built out is Hivemind.

Hivemind shows you who you’re missing on Twitter. Give hivemind up to five Twitter users that interest you, and it will report back on who those people as a group are all following that you aren’t.

If you respect a few folks and want to find out who they are all following that you are not, this is a great way to add people to your following list.  You are using people as “curators.”
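Under the hood this is simple set arithmetic: intersect the following lists of your chosen curators, then subtract the people you already follow.  A toy sketch (the handles and data below are made up; the real app pulls this from the Twitter API):

```python
def hivemind(curators_following, my_following):
    """Return accounts that *every* curator follows but I don't.

    `curators_following` maps each curator to the set of accounts they follow;
    `my_following` is the set of accounts I already follow.
    """
    followed_by_all = set.intersection(*curators_following.values())
    return followed_by_all - my_following

curators = {
    "howardlindzon": {"stocktwits", "fredwilson", "bijan"},
    "andrewparker":  {"fredwilson", "bijan", "cdixon"},
}
print(hivemind(curators, my_following={"fredwilson"}))  # {'bijan'}
```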

The two examples above are from wow.ly, which tends to use people’s following/follower lists as proxies for analysis, but other people are building other kinds of projects.

Unmasking Masked Links

Check out TweetMeme.  If you are familiar with Digg, then you’ll quickly understand TweetMeme.  Their tagline is “check out the top links on Twitter.”  They break the links down into a few different categories as well, so navigating through them becomes easier.  This is a nice complement to TechMeme and worth a check every so often.

A project I came across is called Bitme.me – a name I don’t understand, but useful nonetheless.  I had this idea as well, but Dan Lewis actually took the steps to build the site.  Bitme pulls the most-clicked links across Bit.ly (though not all of the links are on Twitter), confined to a curated list of sites.

Example:  For BitMe’s Technology section, he is pulling the top clicked bit.ly links from Mashable, TechCrunch, ReadWriteWeb, and a few other sites.  Extremely helpful.
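The underlying pipeline is just filter-and-rank: keep links whose destination belongs to the curated sites, then sort by clicks.  A sketch of that idea with made-up numbers – this is my reading of the approach, not Dan’s code, and it sidesteps the actual bit.ly API:

```python
from urllib.parse import urlparse

CURATED = {"mashable.com", "techcrunch.com", "readwriteweb.com"}

def domain(url):
    """Bare domain of a URL (drops a leading 'www.')."""
    netloc = urlparse(url).netloc.lower()
    return netloc[4:] if netloc.startswith("www.") else netloc

def top_links(click_counts, limit=5):
    """Rank destination URLs by click count, keeping only curated sources."""
    curated = {url: clicks for url, clicks in click_counts.items()
               if domain(url) in CURATED}
    return sorted(curated.items(), key=lambda kv: kv[1], reverse=True)[:limit]

print(top_links({
    "http://techcrunch.com/some-story": 4200,
    "http://example.com/off-topic": 9000,     # dropped – not a curated site
    "http://mashable.com/another-story": 3100,
}))
```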

Twitter In Itself is “Dumb”

It’s not dumb in the sense that I’m not going to use it; it’s dumb in the sense that it is much more intelligent when you interface with it through the apps.  That in itself is a definition of a platform.  I believe that Twitter will ultimately succeed if it remains a platform and allows developers to continue innovating through the APIs.  If Twitter starts to make acquisitions that limit the number of companies accessing the APIs, then the pace of innovation may slow.  ReadWriteWeb had a great article about Twitter’s API rate change that could lead to significant innovation, and I’m all for it.

As mentioned, my tipping point was when I started exploring the apps ecosystem surrounding Twitter.  This tipping point is the true understanding of what a platform really is.  If you are inspired and want to learn more, check out John Borthwick’s Charting the Real Time Web.  If the ecosystem of the real-time web is inspiring to you, check out Betaworks’ network of companies to see how the space is playing out.

As for me, I’ve made a recent investment in this space (the company is not listed in this post) that should surface in February/March 2010.  I’ve been inspired by streams of content and believe there are simple ways to consume content that can be leveraged by the masses.  Look out for more posts like this one, and yes, I’ll be tweeting about it too.  You can follow me on Twitter here.

In The Data Decade, There Should Be Data-Driven Acquisitions

I’ve been immersing myself in the world of data for the past few years at work, reading lots of books and speaking to many gurus.  What interests me both personally and professionally is the application of large data sets and how to use them to gain an advantage over the competition.  Call it data arbitrage, if you will.

During my commute this morning, driving down the West Side Highway, I was thinking about which companies I would acquire, and why, if I were a big corporation – purely for their data assets.  Here are a few.  I’ve obviously not thought of everything, so please chime in in the comments section.

Also, please note that solid business strategy does not mean these need to be acquisitions; they could be strategic relationships, tactical biz dev, or partnerships.  I have them listed as acquisitions, but understand that not all of them should be.

Record Labels (Warner, EMI, Sony, Universal) acquire Pandora, MOG, and Spotify

I do not think the labels need to acquire all of the above, but at least one.  The reason?  Why not own the data that shows what listeners are listening to around the world (audioscrobbling)?  With that data, a label can do a few things: cut down on A&R spend, since you can find artists more easily; view music consumption trends, since you can see which types of music, and their intricacies, are popular; and help with distribution of music and tours, since you can see which artists and genres are popular in different markets and make sure the artist visits that region or sells their merch.
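As a toy illustration of the kind of query a label could run against listening data once it owned it – the records and field layout below are entirely made up:

```python
from collections import Counter, defaultdict

# Each play record: (listener_country, artist, genre) – made-up sample data.
plays = [
    ("ZA", "Artist A", "afro-pop"),
    ("ZA", "Artist A", "afro-pop"),
    ("US", "Artist B", "indie"),
    ("ZA", "Artist C", "house"),
]

def top_artists_by_market(plays, top_n=1):
    """For each market, rank artists by play count."""
    by_market = defaultdict(Counter)
    for country, artist, _genre in plays:
        by_market[country][artist] += 1
    return {country: counts.most_common(top_n)
            for country, counts in by_market.items()}

print(top_artists_by_market(plays))
# {'ZA': [('Artist A', 2)], 'US': [('Artist B', 1)]}
```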

Professional Sports teams acquire fantasy sports research companies (The Huddle, FF Today) and/or leagues themselves (i.e. CBS Sports Fantasy Football)

If we believe that the wisdom of the crowd is smarter than a few humans, then why wouldn’t NFL scouts want access to college football fantasy sports data – i.e. how many “fantasy GMs” owned certain players, their value, etc.?  I’m sure there is value in all of the data that fantasy sports generate, and the professional teams could really benefit, given the outrageous salaries they are paying these players.

A Hedge Fund Acquires Wikipedia

Wikipedia has tons of research, but what is probably most interesting to me (and what I do not have access to) is which articles/topics on Wikipedia are trending.  If someone knew which articles were the most read and which were trending in near-real-time, investors could make big decisions about where to place their bets.  If articles about South African soccer balls are trending, then maybe an acquisition or investment can be made in the country from which those queries are coming.
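A toy sketch of what “trending” could mean here: compare an article’s most recent hourly pageviews against its own recent baseline.  The article names, numbers, and threshold are invented; real data would come from Wikipedia’s traffic logs:

```python
def trending_articles(pageviews, baseline_hours=24, threshold=3.0):
    """Flag articles whose latest hourly views spike above their baseline.

    `pageviews` maps an article title to a list of hourly view counts,
    oldest first; the last element is the most recent hour.
    """
    trending = {}
    for title, counts in pageviews.items():
        history, latest = counts[:-1][-baseline_hours:], counts[-1]
        baseline = sum(history) / max(len(history), 1)
        if baseline and latest / baseline >= threshold:
            trending[title] = latest / baseline
    return sorted(trending.items(), key=lambda kv: kv[1], reverse=True)

views = {
    "South African soccer ball": [40, 35, 50, 45, 400],      # spiking
    "Photosynthesis":            [900, 950, 880, 920, 910],  # steady
}
print(trending_articles(views))  # [('South African soccer ball', ...)]
```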

How about some others?  I know I had a few others in my head but forgot them by the time I sat down to write this.

This Decade Will Welcome The End of The Branding Campaign

Yes, that’s right.  As someone who has spent the past few years inside a very well known marketing agency, I am calling this decade the death of the pure branding campaign.  This was inspired by the comments on Fred Wilson’s blog post, Affiliate Marketing Undervalues the Link.

Before I proceed, let me preface that this is my personal blog and these are my thoughts, which may or may not be reflected by my employer.

A branding campaign is a proxy for “we do not have, or have access to, the right measurement tools to substantiate a fully measured campaign.”  Think about it.  Hollywood typically uses “branding” campaigns to launch movies because there is very little technology out there that can report back on how many people actually saw CommercialX and went to Loews to purchase a ticket this past Saturday.

There is $250 billion spent on measured US advertising each year, and we do not have the tools to adequately measure the entire marketing campaign.  Something seems off, right?

If we look to Wall Street for guidance on where importance is placed, goodwill (brand) is recognized on the balance sheet, but at the end of the day, earnings are most important.  Google’s goodwill is a lot less than its operating income, which is where Wall Street tends to place its bets, and the stock is performing well.

With technology penetrating the advertising ecosystem – ad servers, databases, optimization engines, bid management platforms – one would think that we’re closer to a measurable ecosystem.  We can only get to that end state if all of our media channels are digital (not necessarily laptop- or desktop-driven), because only then will we be able to measure and analyze.

As television (the largest medium for ad dollars), print (Kindle, Nook, etc.), radio (satellite, HD), and OOH (digital OOH) all become digitized, we can get closer to measuring campaign success.

What many of us deem extremely important in both optimization and conversion is the actual path to conversion.  Atlas and DART each have special names for this, but let’s use Engagement Mapping.  If you are exposed to 7 different advertisements across multiple channels and you convert after the 7th, then generally the last exposure gets all of the credit.

Where this model breaks down is on any non-measured exposure (today: TV, print, radio, OOH, and sometimes search – yes, search – if it’s not part of the database).  This is why we need all channels to have a digital backbone or be measured by one (and one without a biased panel set).  Let’s assume this is fixed (a big assumption) and we can measure all the way through from first exposure to conversion (and post-conversion).
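Once every exposure on the path is captured, assigning credit becomes a modeling choice rather than a data problem.  Here is a simple sketch contrasting last-touch credit with an even (linear) split across the path – illustrative only, not how Atlas or DART actually weight exposures:

```python
def last_touch(path):
    """All conversion credit goes to the final exposure."""
    return {path[-1]: 1.0}

def linear(path):
    """Credit is split evenly across every exposure on the path."""
    share = 1.0 / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# A seven-touch path ending in a paid-search click.
path = ["tv", "display", "print", "display", "radio", "email", "search"]
print(last_touch(path))  # {'search': 1.0}
print(linear(path))      # each exposure worth ~0.143 of the conversion
```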

If we are at this end-state, THEN why do we need a branding campaign?   Why should we not include a full call-to-action on each piece of media that drives the user to take some action and to properly associate value with each?

It may take 10 years to get us there, but at least it will be in this new decade.  If this happens, two radical things will occur:

  1. John Wanamaker will roll over in his grave as we will have figured out which part of his ad spend is working for him.
  2. Marketers may realize that they are overspending or underspending with their media dollars.

Decades:
1990s – playing around and innovating
2000s – making things work, starting to track and monetize
2010s – realizing the potential and investing in the infrastructure to make this happen

Caveat:
One could argue that you need awareness before you can get to conversion.  Yes, that’s correct, but you can go from instant awareness to conversion much faster now.  We also know that consumers enter a purchase funnel from many different places and that some arrive at the funnel much lower than the awareness stage.  Just go with it.

If you can understand the technology infrastructure needed to make this all happen, then you can understand why I love where advertising & marketing is headed.  Happy holidays – @dherman76

Visualizing TechCrunch, GigaOm, and Mashable

There is no lack of content online that I’m particularly interested in.  The issue is not finding the content, but rather it is consuming the content in a reasonable amount of time while I try to balance family life and all of the other joys in my life.

This morning I ran an experiment: use Wordle to visualize TechCrunch, GigaOm, and Mashable by their RSS/Atom feeds.  This is part of a larger experiment I’m pursuing, but that story is for a later date.  What I find particularly interesting is that you can quickly see the distinctions between their reporting; for example, GigaOm focuses more on tech (VMware, infrastructure), whereas Mashable covers lots of consumer web (the Dorthy launch, WordPress).
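The experiment is easy to reproduce.  A minimal sketch using the third-party feedparser library to produce the word counts that Wordle visualizes (the stopword list is arbitrary, and the feed URL is just an example):

```python
import re
from collections import Counter

import feedparser  # third-party: pip install feedparser

STOPWORDS = {"the", "and", "for", "that", "with", "this", "from", "are", "you"}

def content_cloud(feed_url, top_n=25):
    """Count the most frequent words in a feed's entry titles and summaries."""
    feed = feedparser.parse(feed_url)
    counts = Counter()
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}"
        text = re.sub(r"<[^>]+>", " ", text)          # drop any embedded HTML
        for word in re.findall(r"[a-z]{3,}", text.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(top_n)

# Example feed URL – swap in any RSS/Atom feed you want to visualize.
print(content_cloud("http://feeds.feedburner.com/TechCrunch"))
```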

TechCrunch – Content Cloud (below)

TechCrunch

Mashable – Content Cloud (below)

Mashable Cloud

GigaOm – Content Cloud (below)

GigaOm Cloud

2010 Predictions & Trends "Cloud"

I aggregated 25+ trends & predictions for 2010 centered around technology, media, and advertising last night (see them here) and created a “wordle” based on them.  See below.  You can obviously see that Facebook, Mobile, Google, Social, and Twitter are popular.

Some not-so-big (in terms of volume) but still important trends and predictions that appeared:  data, cloud, devices and acquisition.

Blogs/sites that I aggregated include but are not limited to Mashable, Read Write Web, Clickz, AdAge, Alley Insider, Center Networks, GigaOm, WSJ, MediaPost, CNBC, Adotas, and many, many more.

happy new year – @dherman76

Trends Cloud Large

2010 Predictions & Trends: Technology, Media & Marketing

I created an uber prediction list back in 2007 and it was a hit.  Here are some of the predictions already posted across the net re: technology, media, and marketing – three things I’m very interested in.  If I missed your post and you’d like to be listed, just leave a comment and I’ll add you.

My Friend David's New Book – Releasing on Dec 31 – Semantic Web

I met David a few years back at a dinner hosted by an MD of a hedge fund who happens to be one of the most connected digital guys.  Little did I know, I was sitting next to the guru behind the early web design books of the ’90s, two of which I had on my shelf at home.

I maintained a friendship with David beyond the dinner, and we’ve gotten coffee over the years to chat about his entrepreneurial engagements, chocolate, roller skating, his candidacy to become Apple’s next CEO, and of course, children (he’s also the father of a young child).

David’s big day is tomorrow, when Portfolio Hardcover is releasing his next book entitled Pull:  The Power of the Semantic Web to Transform Your Business.  The title is a mouthful and a bit boring but if I know David, the book will certainly impress.

Per his book description:  Pull is the blueprint to the next disruptive wave. Some call it Web 3.0; others call it the semantic web. It’s a fundamental transition from pushing information to pulling, using a new way of thinking and collaborating online. Using the principles of this book, you will slash 5-20 percent off your bottom line, make your customers happier, accelerate your industry, and prepare your company for the twenty-first century. It isn’t going to be easy, and you don’t have any choice. By 2015, your company will be more agile and your processes more flexible than you ever thought possible.

For those who are interested in learning about the semantic web further, you can purchase his book on Amazon.

Other ways to learn about the semantic web:  wikipedia, w3c, semanticweb.org, avc (fred wilson), oreilly

Apple: Ads Coming to OSX – Get Paid to Use the Computer

I received a very timely email from my friend David Siegel pointing me (and a very small group of recipients) to Google’s Patent Search database.  On October 22, 2009, patent application US2009/0265214A1, Advertisement in Operating System, naming Steven Jobs (et al.), was published; it was originally filed in April 2008.

The abstract of the patent reads:

Among other disclosures, an operating system presents one or more advertisements to a user and disables one or more functions while the advertisements is being presented.  At the end of the advertisement, the operating system again enables the function(s).  The advertisement can be visual or audible.  The presentation of the advertisement(s) can be made part of an approach where the user obtains a good or service, such as the operating system, for free or reduced cost.

A few things to point out:

  • To re-state the obvious:  Apple is exploring ways to provide their OS or other services/products for free or a reduced cost
  • Apple is looking at both audio and visual (can probably expect video to be in here) types of ads
  • The user is forced to watch an ad per the below statement “disables one or more functions while advertisement is being presented”
  • The screenshots in the patent filing show an OSX desktop, not an iPhone screen.

Where can Apple roll this out?

  • I initially gravitated toward thinking that my next MacBook or iMac would probably be ad-supported, but now I’m thinking it might be my next iPhone, or even the increasingly popular (still not released) Apple tablet.  With increasing pressure from Google moving into the mobile space, and recent acquisitions in which both parties (Apple & Google) were at the table (i.e. AdMob), advertisements could help drive down the price of (or subsidize) a cellphone or tablet.

Media Distribution

  • The opportunity for Apple to become a media distribution hub could be tremendous.  Think about how many Hollywood films or video game trailers they could distribute through their desktop advertising network.

Using advertisements to subsidize a service or make it completely free is not new.  This patent, however, is potentially important to the industry as Apple looks for future ways to monetize its platforms amid increasing pressure from competition that is free or very low-cost.

Dell and other PC makers sell advertising/distribution to companies that want their applications placed on the desktop of a new computer.  Ever wonder why those icons are there when you open up your brand new machine?  Business development deals place them there, with companies paying for distribution.

AllAdvantage was a tool that users downloaded from the Internet in late ’99 that compensated them for browsing the web.  They popularized the tagline, “Get Paid to Surf the Web.”

Will Apple make famous, “Get Paid to Use the Computer”?