Category Archives: Internet & Web X.0

The Early Days of Silicon Alley – Some Random Quotes

I’m in the middle of doing some research for a blog posting I’d like to write this weekend and came across some articles written about my favorite zine of the 90s – Silicon Alley Reporter.  For those unfamiliar with SAR, it was the newsletter/magazine that the entire trade read, and it helped fuel much of the industry throughout its birth.  Jason Calacanis was the brains behind the magazine, which morphed into Venture Reporter (2001) and was ultimately sold to Wicks Business Information.

One of the articles I want to share was written in 1998 by the NY Times and featured Silicon Alley Reporter (among others).

I like many of the quotes/lines in the article and in no particular order, here they are:

“The industry here is driven by people,” Mr. Calacanis, 27, lectured a magazine job applicant recently over breakfast at Balthazar’s. “On the West Coast there are a handful of celebrities — Bill Gates, Larry Ellison,” Mr. Calacanis said, referring to the chairmen of the world’s two largest software companies. “Those people are boring. They don’t take good pictures.”

Mr. Calacanis’s irreverence is either refreshing or hopelessly sophomoric, depending on one’s point of view. Yet his shameless boosterism of Silicon Alley — New York’s downtown multimedia hub — has helped create the aura of community and even home-team spirit for a disparate group of colleagues and competitors who otherwise might have little more in common than being located in Manhattan and trying to find some sort of profitable business on the World Wide Web.

“There are two Silicon Alleys — the one where people go to the parties and the one where people actually do the work,” Mr. Shannon said.

The Alley’s derivative nickname first caught on in 1995, with the explosion of the Web and the conviction that its ”content” — the techie catch-all for magazines, newspapers, music, movies, TV, advertising and various vaguely defined interactive stuff — would be produced in the world’s media capital. But of all of those things, only advertising has shown a propensity to make money on the Web. Moreover, the truly successful Web content — at least this month — appears to be the information tools or ”portals” aggregated by major search engines like Yahoo, Excite and Snap, which are all located on the West Coast.

If you’d like to be alerted of the forthcoming full posting, follow me on twitter @dherman76

ESPN Will Lead Live TV into the Next Decade

It’s a bold statement in the title, but I believe that the only traditional television programming to survive in the future will be based around live events.  Unless Ticketmaster/Live Nation creates a live concert channel, which I assume is not out of the realm of possibility, I think ESPN and its associated brother/sister channels will lead television into the next decade.

It’s just not fun to watch a time shifted sporting event.

Read this article in the Sports Business Journal about ESPN entitled, Industry Wonders Who Will Challenge ESPN?

Another great addition to live television is the use of social media and technology to enhance the experience.  Yes, you can use Hot Potato during an episode of The Hills, but watching a sporting event with updates via tweets/etc is much more enjoyable.  We’ve not even scratched the surface of what is available here, but it’s an area I’m interested in, and I’m sure we’ll see lots in the next two years.

In a recent article in Advertising Age, entitled Live TV is Alive as Ever, Boosted by Social Media, author Andrew Hampp talks about the undeniable link between social media buzz and TV ratings.

With a programming schedule dominated by live events, ESPN is in a nice position to win the television war into the next decade.

Note:  the TV war won’t just be on your beautiful 60″ LCD in your living room – it’ll be on multiple screens, including your mobile devices.

Thinking About Content Consumption

I’ve been thinking a lot about content consumption lately.  The main reason for this is because for me, it’s broken.  Sub-optimal.   There is no way I can keep up with everything that is actively passed my way and passively written that I may want to consume.

There are a few reasons for this:

  • Not enough time in the day (which one can argue comes down to prioritization)
  • Content overload – at some point, my brain switches OFF, usually around 7/8pm after a 12 hour day
  • Filtering of content is weak, though lots of people are trying to solve this
  • Access to content at the *right* time

The first RSS reader I used (and paid for) was NewsGator, back in 2005.  I used NewsGator for a while and then switched over to Google Reader for a specific reason – though I don’t remember precisely what it was.  I loved Google Reader for a while, but the issue I had with RSS readers was that if I did not check them for a period of time, I felt so inundated with information that I would just reset all the feeds (mark as read) and start from scratch… thereby losing all of that content.  Needless to say, I’ve not used my RSS reader in the past 8 months or so.

I’ve recently been using My6Sense on my iPhone and happen to like what they are doing.  They are using something that they call Digital Intuition (their secret-sauce filtering whiz-bang) to filter out articles that they think I may not like, and it’s working so far.

Having recently seeded a company within the content consumption space, I’d like to share a few thoughts with you, which I hope you’ll riff on, expand, and debate:

  • Screen space (i.e. your laptop screen, netbook screen, iPhone, etc.) is not used optimally.  Why should one application dominate the entire screen?  There are times when this is needed (i.e. spreadsheets, etc.) but imagine dedicating a portion of your screen to “content”
  • Content consumption can be done two ways:  passively & actively.  Many people are trying to solve content consumption through active means, but there is certainly opportunity around passive consumption.  It’s not OR, but rather, “AND.”  I think the killer content consumption application is passive AND active.
  • Filtering content is not easy.  Neither active filtering (such as Pandora’s) nor passive filtering (such as Amazon’s) has been perfected, though many users expect it.
  • The reason for content consumption is not always logical or rational, which throws off an intelligence engine/filtering product.  Some of the solid methods account for this, but you would be surprised how many do not.

An interesting company to watch in the space that just emerged is SocialVisor.  They released a ticker-like service (with an interface not dissimilar to what I just seeded) that is a dumbed-down TweetDeck.  Nicely done.

I continue to think there is room to aid consumption.

  • There are 27MM tweets per day, 126MM blogs, 234MM websites and 350MM people on Facebook.  No shortage of content being created. (stats here)
  • There is over $500bln worth of market cap chasing the content indexing game.
  • Who is chasing content consumption?  (me!)

Opening My Wallet For Media Consumption

Being that I work in the digital media industry, there is rarely a day that goes by without some conversation around free vs. paid.  Some folks in the office say that they would gladly pay ~$20 for the NY Times online and others argue that they won’t pay anything because eventually they will hear the news anyway.  I think this argument is less about actually paying for the news but more about paying for timely access to it.  If you could read the news *first,* shouldn’t there be a price to that?

Many of these conversations are being sparked by reports of the NY Times Online putting up some sort of paywall.

Here is what I think they should do:

Go to a pay model, but the pay model should be for the first 12 hours of any article/story.  Meaning, if the Times publishes an article about the latest angel investment I made, then for the first 12 hours of that article’s existence on the Times online, it would only be accessible by subscribers.

Why I think this works:

  1. Online subscription model generates revenue for the NY Times
  2. Stories go “public” after 12 hours, which allows for page consumption (and views) beyond the subscriber base thus increasing inventory for the NY Times sales force to sell against (advertising)
  3. Online subscription model creates a velvet rope, less-so about actually consuming the content, but more-so about recency of consumption (all about getting it first)
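The 12-hour rule above is easy to express as a timestamp check.  A minimal sketch in Python (all names here are my own, hypothetical ones, not anything the Times has published):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical: articles are subscriber-only for their first 12 hours.
PAYWALL_WINDOW = timedelta(hours=12)

def can_read(published_at, is_subscriber, now=None):
    """Subscribers read immediately; everyone else waits 12 hours."""
    now = now or datetime.now(timezone.utc)
    if is_subscriber:
        return True
    return now - published_at >= PAYWALL_WINDOW

# A story published 3 hours ago: subscribers only.
published = datetime.now(timezone.utc) - timedelta(hours=3)
print(can_read(published, is_subscriber=True))   # True
print(can_read(published, is_subscriber=False))  # False
```

The same gate generalizes to any publisher: the product being sold is recency of access, not the content itself.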

What are your thoughts?  Think this can work?  I think this model can move well beyond the NY Times, no?

My Twitter Tipping Point

One of my tweets today was:

I got a few immediate responses from @adventurista, @jbguru, and @mediahorizons asking me what the tipping point was.  My tipping point was when I started exploring the apps ecosystem surrounding Twitter to fully understand what a platform is, and I thought I’d use this post to show how I’ve navigated the waters.

I joined twitter 1026 days ago (March 19, 2007) which was during SXSW 2007.  My first tweet is below:

My first tweet

I did not really know what Twitter was – other than a way for the TechCrunch crowd to communicate back and forth with each other and, in some ways, use it as an ego machine.  @HowardLindzon always put smiles on my face with his obnoxious and ridiculous tweets, and in stark contrast to Howard, @andrewparker was sharing, IMHO, very interesting insight and links.  Twitter became a firehose of content, so controlling who I was following was critical.

Fast forward to today, I am following around 195 people.  While I probably would follow more than 195 people, the firehose of content becomes so great that there is no way I can keep up with everyone.   Only today are the tools being built to help filter and manage the billions of tweets.

I very rarely use Twitter.com as the place where I write my tweets.  Only 1 of my last 20 tweets (5%) was written at Twitter.com; the majority are written from a communications platform such as TweetDeck, Seesmic, and the mobile versions of TweetDeck and Tweetie.  The reason I use these communications platforms is that they help me quickly navigate the content by directing me to all of my friends’ content, direct messages, and mentions, and recently, I set up ways to keep track of certain keywords to see what people are saying.  It’s a very simple social monitoring tool.  A few keywords I’m tracking today are KBS+P, Cliqset and Snackr through programs like TweetDeck and Squawk.

The more I deepen my experience with Twitter, the more I understand the ecosystem and how multi-dimensional it has become.

I have not seen statistics as to how many people are using these communications platforms, but given that TweetDeck is being mentioned at Best Buy for an Interscope promotion, I imagine that there have been quite a few downloads.

To fully understand Twitter as a platform, you need to dig deeper into the developer movement.  I’ve been spending some time with some developers recently and want to highlight one or two which have really helped me understand the capabilities.

Meet – Kevin Marshall, a.k.a. Falicon

Kevin has been building a few apps around Twitter under the Falicon name with partner Whitney McNamara.  Think of Falicon as a collection of apps which utilize the manipulation of Twitter and other content (accessed through an API) to provide value to its users.  Falicon has started with Twitter because it’s the most easily accessible and has scale.

ConversationaList is an application that is a Twitter list of the people that you talk to (and about) on Twitter. The list is automatically updated daily, so that it always reflects the people that you are paying attention to right now. If you @reply (or @mention) someone, they’re added to your list. If you stop talking to that person, they drop off your list.

This provides lots of value as it helps me navigate my personal firehose and allows me to find very relevant information.
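The behavior described above – scan your own recent tweets for @mentions and rank the people you actually talk to – can be sketched in a few lines.  This is just an illustration of the idea, not Falicon’s actual code, and it skips the Twitter API calls entirely (the sample tweets and handles are made up):

```python
import re
from collections import Counter

MENTION = re.compile(r'@(\w+)')

def conversational_list(recent_tweets, max_size=25):
    """Rank the people you @reply/@mention most often in your own
    recent tweets; rebuilding this daily keeps the list current."""
    counts = Counter(
        name.lower()
        for tweet in recent_tweets
        for name in MENTION.findall(tweet)
    )
    return [name for name, _ in counts.most_common(max_size)]

tweets = [
    "@falicon nice work on the new app",
    "chatting with @awhitney about filtering",
    "@falicon when does hivemind ship?",
]
print(conversational_list(tweets))  # ['falicon', 'awhitney']
```

Because the list is rebuilt from scratch each day, people you stop talking to simply fall out of the counts, which matches the “they drop off your list” behavior.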

Another app they have built out is Hivemind.

Hivemind shows you who you’re missing on Twitter. Give hivemind up to five Twitter users that interest you, and it will report back on who those people as a group are all following that you aren’t.

Do you respect a few folks and want to find out who they are all following that you are not?  This is a great way to add folks to your following list.  You are using people as “curators.”
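The curator idea boils down to set arithmetic: intersect each curator’s following list, then subtract your own.  A rough sketch of that logic (not Hivemind’s actual implementation; all of the data here is made up):

```python
def hivemind(curators_following, my_following):
    """Given the following-lists of up to five curators, return who
    they ALL follow that you don't -- your likely blind spots."""
    common = set.intersection(*(set(f) for f in curators_following))
    return sorted(common - set(my_following))

# Three hypothetical curators and their following lists:
curators = [
    ["alice", "bob", "carol"],
    ["alice", "bob", "dave"],
    ["alice", "bob", "erin"],
]
print(hivemind(curators, my_following=["bob"]))  # ['alice']
```

Requiring unanimity among curators is what makes the output high-signal: a single curator’s quirky follow doesn’t survive the intersection.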

So the above two examples are from Falicon, and they tend to use people’s following/followers lists as proxies to analyze data from, but other people are building other projects as well.

Unmasking Masked Links

Check out TweetMeme.  If you are familiar with Digg, then you’ll quickly understand TweetMeme.  Their tagline is “check out the top links on Twitter.”  They break down the links by a few different categories as well, so navigating through these links becomes easier.  This is a nice complement to TechMeme and worth a check every so often.

A project that I came across is called BitMe, a name I don’t understand but useful nonetheless.  I had this idea as well, but Dan Lewis actually took the steps to build this site.  BitMe pulls the most clicked links across bit.ly (though not all links are on Twitter), confined to a curated list of sites.

Example:  For BitMe’s Technology section, he is pulling the top clicked links from Mashable, TechCrunch, ReadWriteWeb, and a few other sites.  Extremely helpful.

Twitter In Itself is “Dumb”

It’s not dumb in the sense that I’m not going to use it; it’s dumb in the sense that it is much more intelligent when you interface with it through the apps.  That in itself is a definition of a platform.  I believe that Twitter will ultimately succeed if it remains a platform and allows developers to continue innovating through the APIs.  If Twitter starts to make acquisitions that limit the number of companies accessing the APIs, then the pace of innovation may slow.  ReadWriteWeb had a great article about Twitter’s API rate change that could lead to significant innovation, and I’m all for it.

As mentioned, my tipping point was when I started exploring the apps ecosystem surrounding Twitter.  This tipping point is the true understanding of what a platform really is.  If you are inspired and want to learn more, check out John Borthwick‘s Charting the Real Time Web.  If the ecosystem of the real-time web is inspiring to you, check out Betaworks’ network of companies to see how the space is playing out.

As for me, I’ve made a recent investment in this space (the company is not listed in this post) that should surface in February/March 2010.  I’ve been inspired by streams of content and believe there are simple ways to consume content that can be leveraged by the masses.  Look out for more posts like this one, and yes, I’ll be tweeting about it too.  You can follow me on twitter here.

In The Data Decade, There Should Be Data-Driven Acquisitions

I’ve been immersing myself in the world of data for the past few years at work, reading lots of books, and speaking to many gurus.  What interests me both personally and professionally is the application of large data sets and how to use them to gain an advantage over the competition.  Call it data arbitrage, if you will.

During my commute this morning while driving down the West Side Highway, I was thinking about which companies I would acquire if I were a big corporation, and why – purely for their data assets.  Here are a few; I’ve obviously not thought of everything, so please chime in in the comments section.

Also, please note that solid business strategy does not mean that these need to be acquisitions, but potential strategic relationships, tactical biz dev, and partnerships.  I have them listed as acquisitions but understand that not all of them should be.

Record Labels (Warner, EMI, Sony, Universal) acquire Pandora, MOG, and Spotify

I do not think that the labels need to acquire all of the above, but at least one.  The reason?  Why not own the data that shows what listeners are listening to around the world? (Audioscrobbling)  By having this, you can do a few things for the label: cut down on A&R spend, since you can find artists more easily; view music consumption trends, since you can see which types of music and their intricacies are popular; and help with distribution of music/tours, since you can see which artists/genres of music are popular in different markets and make sure that the artist visits that region or sells their merch there.

Professional Sports teams acquire fantasy sports research companies (The Huddle, FF Today) and/or leagues themselves (i.e. CBS Sports Fantasy Football)

If we believe that the wisdom of the crowd is smarter than a few humans, then why wouldn’t NFL scouts want access to college football fantasy sports data – i.e. how many “fantasy GMs” owned certain players, their value, etc.?  I’m sure there is some value in all of the data that fantasy sports generate, and the professional teams can really benefit, as they are paying outrageous salaries to these particular players.

A Hedge Fund Acquires Wikipedia

Wikipedia has tons of research, but what is probably most interesting to me (and what I do not have access to) is which articles/topics on Wikipedia are trending.  If someone knew which were the most-read articles and which articles were trending in near-real-time, then investors could make big decisions about where they should place their bets.  If articles about South African soccer balls are trending, then maybe an acquisition/investment can be made in the country from which these queries are being made.

How about some others?  I know I had a few others in my head but forgot them by the time I sat down to write this.

This Decade Will Welcome The End of The Branding Campaign

Yes, that’s right.  As someone who has spent the past few years inside a very well known marketing agency, I am calling this decade the death of the pure branding campaign.  This was inspired by the comments on Fred Wilson’s blog post, Affiliate Marketing Undervalues the Link.

Before I proceed, let me preface that this is my personal blog and these are my thoughts, which may or may not be reflected by my employer.

A branding campaign is a proxy for “we do not have, or have access to, the right measurement tools to substantiate a fully measured campaign.”  Think about it.  Hollywood typically uses “branding” campaigns to launch movies because there is very little technology out there that can report back how many people actually saw CommercialX and went to Loews to purchase a ticket this past Saturday.

There is $250bln spent in measured US advertising each year and we do not have the tools to adequately measure the entire marketing campaign.  Something seems off, right?

If we look to Wall Street for guidance on where importance is placed, goodwill (brand) is recognized on the balance sheet, but at the end of the day, earnings are most important.  Google’s goodwill is a lot less than its operating income, which is where Wall Street tends to place its bets, and the stock is performing well.

With technology penetrating the advertising ecosystem – i.e. ad servers, databases, optimization engines, bid management platforms – one would think that we’re closer to a measurable ecosystem.  We can only get to the end state if all of our media channels are digital (not necessarily laptop- or desktop-driven), as then we will be able to measure and analyze.

As television (the largest medium for ad dollars), print (Kindle, Nook, etc.), radio (satellite, HD), and OOH (digital OOH) all become digitized, we can get closer to measuring campaign success.

What many of us deem extremely important in both optimization and conversion is the actual path to conversion.  Atlas and DART each have special names for this, but let’s use Engagement Mapping.  If you are exposed to 7 different advertisements across multiple channels and you convert after the 7th, then generally the last exposure gets all of the credit.

Where this model breaks is upon any non-measured exposure component (today: TV, print, radio, OOH, and sometimes search – yes, search, if it’s not part of the database).  This is why we need all channels to have a digital backbone or be measured by one (and one without a biased panel set).  Let’s assume this is fixed (a big assumption) and we can measure all the way through from first exposure to conversion (and post).
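The contrast between last-touch credit and spreading credit across the path can be sketched in a few lines.  The even weighting below is illustrative only – it is not Atlas’s or DART’s actual attribution model, and the channel names are made up:

```python
def last_touch(path):
    """All conversion credit goes to the final exposure."""
    return {path[-1]: 1.0}

def even_weight(path):
    """Spread credit evenly across every exposure in the path."""
    share = 1.0 / len(path)
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + share
    return credit

# Seven exposures across channels before a conversion:
path = ["tv", "print", "display", "display", "search", "email", "display"]
print(last_touch(path))   # {'display': 1.0}
print(even_weight(path))  # display gets 3/7, the others 1/7 each
```

Even this naive split already tells a very different story than last-touch: the TV and print exposures that started the path finally show up on the ledger, which is the whole point of measuring the full path.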

If we are at this end-state, THEN why do we need a branding campaign?   Why should we not include a full call-to-action on each piece of media that drives the user to take some action and to properly associate value with each?

It may take 10 years to get us there, but at least it will be in this new decade.  If this happens, two radical things will occur:

  1. John Wanamaker will roll over in his grave, as we will have figured out which part of his ad-spend is working for him.
  2. Marketers may realize that they are overspending or underspending with their media dollars.

1990s – playing around and innovating
2000s – making things work, starting to track and monetize
2010s – realizing the potential and investing in the infrastructure to make this happen

One could argue that you need awareness before you can get to conversion.  Yes, that’s correct, but you can go from instant awareness to conversion much faster now.  We also know that consumers enter a purchase funnel from many different places and that some arrive at the funnel much lower than the awareness stage.  Just go with it.

If you can understand the technology infrastructure needed to make this all happen, then you can understand why I love where advertising & marketing is headed.  Happy holidays – @dherman76

Visualizing TechCrunch, GigaOm, and Mashable

There is no lack of content online that I’m particularly interested in.  The issue is not finding the content, but rather it is consuming the content in a reasonable amount of time while I try to balance family life and all of the other joys in my life.

This morning I ran an experiment:  use Wordle to visualize TechCrunch, GigaOm, and Mashable by their RSS/Atom feeds.  This is part of a larger experiment that I’m pursuing, but that story is for a later date.  What I find particularly interesting is that you can quickly see the distinctions between their reporting; as an example, GigaOm focuses more on tech (VMware, infrastructure) while Mashable covers lots of consumer web (Dorthy launch, WordPress).
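Under the hood, a Wordle-style cloud is just term frequencies over the feed text.  A sketch using a plain string in place of a live RSS fetch (a real version might pull and parse the feed with a library like feedparser; the sample text and stopword list are my own):

```python
import re
from collections import Counter

# A tiny illustrative stopword list; Wordle's real one is much larger.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "for", "is"}

def cloud_weights(text, top=5):
    """Count non-stopword terms; a word cloud sizes words by these counts."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS).most_common(top)

feed_text = ("VMware launches new infrastructure tools. "
             "WordPress update ships. VMware expands cloud infrastructure.")
print(cloud_weights(feed_text))
# [('vmware', 2), ('infrastructure', 2), ...]
```

Run against each publication’s feed, the top-weighted terms are exactly what makes the editorial differences between GigaOm and Mashable jump out of the clouds below.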

TechCrunch – Content Cloud (below)


Mashable – Content Cloud (below)


GigaOm – Content Cloud (below)

GigaOm Cloud

2010 Predictions & Trends "Cloud"

I aggregated 25+ trends & predictions for 2010 centered around technology, media, and advertising last night (see them here) and created a “wordle” based on them.  See below.  You can obviously see that Facebook, Mobile, Google, Social, and Twitter are popular.

Some not-so-big (in terms of volume) but still important trends and predictions that appeared:  data, cloud, devices and acquisition.

Blogs/sites that I aggregated include but are not limited to Mashable, Read Write Web, Clickz, AdAge, Alley Insider, Center Networks, GigaOm, WSJ, MediaPost, CNBC, Adotas, and many, many more.

happy new year – @dherman76

Trends Cloud Large

2010 Predictions & Trends: Technology, Media & Marketing

I created an uber prediction list back in 2007 and it was a hit.  Here are some of the predictions already posted across the net re: technology, media, and marketing – three things I’m very interested in.  If I missed your post and you’d like to be listed, just leave a comment and I’ll add you.