The Trouble with Web 2.0

December 23, 2006

Once upon a time, publishing webpages was solely the domain of a relatively select few: those who had the ability to code in HTML, who knew how to use FTP to upload files and who had access to space on a webserver connected to the Internet. A decade ago, GeoCities was one of the first sites to offer free webspace for the general public to post their own pages. Many, many bad pages were produced, mainly because you still needed technical skills and, ultimately, it was a sea of static pages providing one-way communication. And just because you had technical skills, it didn’t mean you also had writing and layout skills.

Skip forward to late-2004 when the term Web 2.0 was first used. A new wave of dynamic and totally interactive websites was introduced and the previous travellers of the information superhighway could all suddenly become consultants to and constructors of it. Wikipedia introduced the concept of a free on-line encyclopedia with hundreds of thousands of contributors and reviewers. MySpace offers social networking with an interactive, user-submitted network of friends, personal profiles, blogs, photos, music, videos and groups. Sites like del.icio.us are a social bookmarking phenomenon and have the power to direct large numbers of visitors to websites through a quick and simple recommendation system. Sites and services like these are increasing the generation of content on the web exponentially, simply by giving everyone the ability to easily contribute.

The trouble with Web 2.0 is that many new contributors have little consideration for laws and ethics, and the governments of many nations have no comprehension of the implications of Web 2.0. For example, a few years ago the band Metallica came down very heavily on peer-to-peer sharing networks like Napster for illegal distribution of their music. In the scheme of things, Napster users were a drop in the bucket compared to YouTube. This free site contains videos contributed by anyone and viewable by everyone, and many of Metallica’s video clips and live performances are neatly catalogued there. While YouTube’s Terms of Use specifically state that uploading copyrighted material is not permitted, the worst thing that will happen is that the video will be removed once it’s identified. The problem is that with 65,000 videos being posted each day, finding them all is not a simple task. So YouTube is presently a minefield of copyrighted videos – but even that didn’t stop Google from paying US$1.65B to acquire the company. Worse still, it’s a place where kids can post their pranks, shot with their mobile phone cameras. You like to destroy displays in a supermarket? Get your mates to video it, post it on YouTube and you’ve not only got a worldwide audience, but a host of imitators across the globe ready to idolize and emulate your feats. Sadly, there are also videos of school playground bashings and fights.

Social networking sites like MySpace and Bebo are aimed directly at younger people and often at children. While it’s great that children can express themselves and have a voice in front of a wide audience, the more mature concepts of privacy, decency and respect are often lacking in their posts. Further, the legal concepts of copyright infringement, defamation and incitement are easy to forget in the world of Web 2.0. Why is it possible to so easily and publicly identify and defame a man on a site like Don’t Date Him Girl! without any evidence to back it up? Why can students edit Wikipedia and Bebo entries about their school to include disparaging comments about teachers and other students?

The most common way that schools around the world are managing this problem is by filtering (blocking) access to many Web 2.0 sites at school. OK, that keeps the problem out of the school (assuming the children haven’t worked out how to circumvent the filters), but it does nothing to stop the problem at home. Laws are also ill-equipped to manage the problems of Web 2.0. What if the poster is a minor? What if the service is hosted in another country? What lesson will be learnt if the only repercussions are that the offending post will be removed – sometime after it has been found and reported?

So what’s needed? I think governments, schools and parents need to be more open-minded about the social-networking phenomenon for a start. We need to stop managing the posts and start managing the people who post. We need to take the age-old distinction between right and wrong and mould it to fit a Web 2.0 environment. It’s not about exclusion; it’s about teaching respect, consideration and responsible self-publishing. It’s about teaching people to think critically in all aspects of life, and it all needs to be backed up with appropriate, enforceable guidelines and laws.

Finally, yes, I accept the irony of writing about the Problems of Web 2.0 by using a Web 2.0 application. :) And there’s always the problem of what people might add to the comments section! ;)


The Habits of Highly Effective Web 2.0 Sites

December 2, 2006

The next Web 2.0 Conference will be upon us in early November and things are busier than ever in the Web 2.0 world.  Along the way, I’ve managed to miss the one-year anniversary of this blog, which I began back in late September of last year.  There have been over 2.5 million direct hits on this site since inception, a large percentage of them due to my Web 2.0 lists such as last year’s Best Web 2.0 Software List, but I also get frequent e-mail from die-hard readers.  Most importantly, however, from all my conversations with people all over the world, it’s clear that Web 2.0 remains more than ever a topic of major popular interest and industry fascination.

While the general understanding of Web 2.0 is improving all the time, we have a ways to go before we have a concise, generally accepted definition.  My favorite is still “networked applications that explicitly leverage network effects.”  But while most of what we ascribe to the Web 2.0 name falls out of this definition, it’s fairly hard for most of us to extrapolate meaningful ramifications from it.

People who read this blog know that I’m in the camp of folks who try to look beyond Ajax and the visual site-design aspects of Web 2.0, and try to capture the deeper design patterns and business models that seem to be powering the most successful Web sites and online companies today.  Though concepts such as harnessing collective intelligence and Data as the Next Intel Inside, as described by Tim O’Reilly, most directly capture the spirit of the Web 2.0 era, it does seem to me that there are a few other elements that we haven’t nailed down yet.

At the AjaxWorld Conference and Expo earlier this month, I gave my usual talk about how to formally leverage Web 2.0, with plenty of examples coming from things happening out on the Web.  If you accept that the power of the Web today lies in its sheer size, particularly in the number of highly interactive network nodes (most of them people), then giving those nodes extremely low-barrier tools should produce plenty of examples of emergent behavior: significant events happening suddenly and unexpectedly.  Tipping points are getting easier and easier to reach as site designers learn how to create better network effect triggers and draw large audiences suddenly, and as those same audiences increasingly self-organize spontaneously, as in the KatrinaList project (suddenly) or Wikipedia (slower but bigger).
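
To make the tipping-point idea a bit more concrete, here is a toy simulation, entirely my own back-of-the-envelope sketch rather than anything from the talk, of a hypothetical site where every new member makes joining slightly more attractive to everyone else.  The population size and growth factor are invented; what matters is the shape of the curve, which crawls for a few weeks and then jumps.

```python
# Toy network-effect model (illustrative only; all numbers are invented).
population = 100_000   # hypothetical addressable audience
members = 50           # hypothetical seed community

for week in range(12):
    # The chance that a given non-member joins this week grows with the size
    # of the member base, capped so one step can't absorb everyone at once.
    join_probability = min(0.5, 2 * members / population)
    new_members = int((population - members) * join_probability)
    members += new_members
    print(f"week {week:2d}: {members:6d} members")
```

The output stays nearly flat for the first several weeks and then roughly doubles week over week once the trigger catches, which is about as simple a picture of a tipping point as you can draw.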

And it’s the arrival of Web 2.0 “supersites” like YouTube, which appear suddenly, often riding the coattails of other major Web 2.0 sites’ ecosystems, and apply aggressive, viral network effects, that shows us the true, full scale of the possibilities.  Building a Web site worth over one billion dollars in 18 months is a very impressive result, but it’s really only a single axis upon which Web 2.0 can be applied successfully.  Another axis focuses less on pulling in every possible user with a horizontal network effect and more on building a difficult-to-reproduce but highly valuable data source, such as the Navteq mapping database or Zillow’s real estate database.  One might argue that these are still quite horizontal, but they are merely well-known examples.

The variety and depth of the Web is such that not every Web 2.0 site will have tens of millions of users, nor should it.  An effective Web 2.0 site is largely powered by its users, whose feedback and contributions, direct and indirect, make the site a living ecosystem that evolves from day to day, a mosaic as rich and varied as a site’s users would like it to be.  In other words, creating a high-quality architecture of participation is becoming a strategic competitive advantage in many areas.

I’m often asked, particularly after one of my presentations on Web 2.0, to articulate the most important and effective actions a site designer can take to realize the benefits of Web 2.0.  As a result, I’ve created the list below in an attempt to capture a good, general-purpose overview of what these steps are.  My plan in the near future is to dive into each one of these as much as time permits and explain how they make highly effective Web 2.0 sites not only effective, but often possible at all.  In the meantime, please take them for what they’re worth; I believe, however, that they are instrumental in making a Web site or application as successful as possible.

The Essentials of Leveraging Web 2.0

  • Ease of Use is the most important feature of any Web site, Web application, or program.
  • Open up your data as much as possible. There is no future in hoarding data, only in controlling it. (A minimal sketch of this idea follows this list.)
  • Aggressively add feedback loops to everything.  Pull out the loops that don’t seem to matter and emphasize the ones that give results.
  • Continuous release cycles.  The bigger the release, the more unwieldy it becomes (more dependencies, more planning, more disruption).  Organic growth is the most powerful, adaptive, and resilient.
  • Make your users part of your software.  They are your most valuable source of content, feedback, and passion.  Start understanding social architecture.  Give up non-essential control.  Or your users will likely go elsewhere.
  • Turn your applications into platforms. An application usually has a single predetermined use while a platform is designed to be the foundation of something much bigger.  Instead of getting a single type of use from your software and data, you might get hundreds or even thousands of additional uses.
  • Don’t create social communities just to have them. They aren’t a checklist item.  But do empower inspired users to create them.
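
To make the “open up your data” and “turn your applications into platforms” items above a little more concrete, here is a minimal sketch of the smallest possible step in that direction: exposing the same records your pages already render as a plain, read-only JSON feed that other people can build on.  The data, route and port below are invented for illustration; a real service would obviously add caching, paging and some sort of rate limiting.

```python
# A minimal, hypothetical sketch of "opening up your data": serve the records
# your site already renders as read-only JSON that third parties can reuse.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PHOTOS = [  # stand-in for whatever content your users have contributed
    {"id": 1, "title": "Sunset", "tags": ["sky", "orange"]},
    {"id": 2, "title": "Harbour", "tags": ["water", "boats"]},
]

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/api/photos"):
            body = json.dumps(PHOTOS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)  # anyone can now build something on this data
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()
```

The point isn’t the code; it’s that once the data is out there in a machine-readable form, the hundreds of additional uses mentioned above become someone else’s weekend project rather than your roadmap item.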

Of course, there is a lot of work in the details, and these are just some of the important, general essentials.  Unfortunately, a lot of careful thinking, planning, and engineering still goes into any effective Web 2.0 site, and it’s having these ideas at its core that can help you get the best results.

Final Note:  I’ll be on the road the next two weeks and will be at the Web 2.0 Conference in San Francisco from Nov. 7th-9th.  I’ll be there writing coverage for the Web 2.0 Journal and here as much as possible.  If you’re going to be there, please drop me a line if you’d like to meet.


Is Google Adding Blog Search to Google.com?

December 1, 2006

Once-a-month blogger, occasional web designer and UK-based family guy Andy Boyd has posted a screen capture of blog search results appearing on the front page of a regular Google web search. A number of other bloggers have picked this up, but no one else has been able to reproduce Boyd’s experience. It’s a believable scenario because Google added blog search to Google News last month and to Google Alerts four days later. The UK, where Boyd lives, is frequently a testing ground for Google features that are just around the corner.

Obviously, real estate even at the very bottom of the first page of Google results would be great for the blogosphere. If Google Blogsearch can’t get rid of all the splogs in its search results, though, it could lead to some level of backlash against blogs in general. I don’t know anyone who’s as good at excluding splogs as Ask/Bloglines – they only display blog search results from blogs that at least one Bloglines user has subscribed to, and they have algorithms to prevent gaming of that safeguard as well.
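
As a rough illustration of that subscription-based safeguard (my own sketch, not Bloglines’ actual code or data model), the core of the filter amounts to something like this:

```python
# Hypothetical splog filter: only surface results from feeds that at least one
# real user has subscribed to. The data structures are invented for illustration.
def filter_splogs(results, subscriber_counts):
    """results: list of dicts with a 'feed_url' key.
    subscriber_counts: mapping of feed_url -> number of subscribers."""
    return [r for r in results if subscriber_counts.get(r["feed_url"], 0) >= 1]

results = [
    {"title": "Genuine post", "feed_url": "http://example.org/feed"},
    {"title": "Spam blog post", "feed_url": "http://splog.example/feed"},
]
subscriber_counts = {"http://example.org/feed": 42}

print(filter_splogs(results, subscriber_counts))  # only the genuine post survives
```

The hard part, of course, is keeping the subscriber counts themselves honest, which is where the anti-gaming algorithms come in.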

If Google really does take the bold step of including blog search on the front page of its web search results, it will be the first major search engine to do so. Blogs are already included in general search results (see the results for “gay men social networking”, for example) and may have such good search engine optimization natively that they don’t need a special place on the page – but it could only help broaden exposure to the medium.


Marketing Your Business with a Blog

December 1, 2006

Brian Brown on Work.com provides a guide on how to use blogging as an effective way of marketing your business and staying in touch with your customers. The transparent nature of blogging allows you to connect with, communicate with and inform your current and potential customers in a way you just can’t through glossy brochures and slick corporate web sites. A blog can be a very nice complement to your current marketing mix and costs virtually nothing to start up – just a change in mindset from being the keeper and protector of information to being the sharer of it.



Revolutionary Pen-Size Computer Uses Bluetooth Technology

December 1, 2006

A revolutionary new miniature computer is being worked on in Japan, and it comes in the shape of a pen that you can slip into your pocket. It projects a monitor and keyboard onto any flat surface, which you can then begin using like any regular PC. With its Bluetooth technology, it recognizes your key presses and inputs as usual. I’m trying to find out more about it and when it is expected to be available to the masses. Stay tuned!


How to Use Digg to Drive Traffic to Your Site

December 1, 2006

Guy Kawasaki on his blog today wrote about Neil Patel of Pronet Advertising, who put together a beginner’s guide on how to use Digg to drive traffic to your web site. If you are not familiar with Digg, it’s a site where users can submit links to stories they find on the net that they think are worthwhile. The Digg community can then vote on each story and “Digg” it or “Bury” it depending on what they think of it. Stories that receive the largest number of votes get promoted to the front page or “most popular” page. If the community likes your page, this can have a dramatic impact on the amount of traffic going to your site in a relatively short period of time. For more details on how Digg works, click here.
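
As a very rough sketch of that promotion mechanic (Digg’s actual algorithm isn’t public, so the fixed threshold and the sample numbers below are purely illustrative):

```python
# Toy Digg-style promotion: stories whose net votes cross a threshold get promoted.
def net_votes(story):
    return story["diggs"] - story["buries"]

def front_page(stories, threshold=50):
    promoted = [s for s in stories if net_votes(s) >= threshold]
    return sorted(promoted, key=net_votes, reverse=True)

stories = [
    {"title": "Handy CSS trick", "diggs": 120, "buries": 10},
    {"title": "What I had for lunch", "diggs": 3, "buries": 7},
]
for story in front_page(stories):
    print(story["title"])  # only the widely dugg story reaches the front page
```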


Profitably Running an Online Business in the Web 2.0 Era

December 1, 2006

One of the things I’m doing this week is preparing for a presentation at Web Builder 2.0 in Las Vegas next week on how to monetize mashups.  Consequently, I’ve been pulling together notes, talking to mashup creators, and studying real-world examples of how companies are applying innovative ways of generating revenue with Web 2.0 applications and open APIs.  Though there are all sorts of interesting emerging stories, such as the new Second Life millionaire, product developers are increasingly trying to explore the options beyond the obvious: namely, big-value acquisitions a la YouTube or the often fickle, if mostly workable, online advertising route.  But the biggest question that comes up is this: if you let your users generate most of your content and then expose it all via an API, how can a profitable business be made from it?


This has been the question from the outset, and though you can build enormously successful sites in terms of numbers of users and amounts of content using Web 2.0 techniques, the best means of monetizing them remain largely unproven.  I wrote a while back on the struggle to monetize Web 2.0, where I explored in detail the strategic and tactical methods for making next-generation Web sites financially viable, even successful.

If you refer to my original article on monetizing Web 2.0, I identified three tactical means of generating revenue (advertising, subscriptions, and commissions) and a series of strategies that can support them.  While it’s usually fairly clear how the direct revenue models work, it’s much less clear to people how the indirect strategies can influence those opportunities.

Strategies for Making the Most from Web 2.0 

    • There are direct (the 3 items above) and numerous indirect ways to monetize Web 2.0 that often go unappreciated
    • Some of the indirect ways which lead to revenue growth, user growth, and increased resistance to competition — which in turn lead to increased subscriptions, advertising, and commission revenue — are:
      • Strategic Acquisition: Identifying and acquiring Web 2.0 companies on the exponential growth curve before the rest of the market realizes what they’re worth (early exploitation of someone else’s network effects).
      • Maintaining control of hard-to-recreate data sources.  This is basically turning walled gardens into fenced gardens: let users access everything, but don’t let them keep it, as Google does by providing access to its search index only over the Web.  (See the sketch after this list.)
      • Building Attention Trust – By being patently fair with customer data and leveraging users’ loyalty, you can get them to share more information about themselves, which in turn leads to much better products and services tailored to them.
      • Turning Applications into Platforms: A single use of an application is simply a waste of software.  Turn applications into platforms and get 5, 50, or 5,000 additional uses (Amazon, for example, has over 50,000 users of its line-of-business APIs).  Online platforms are actually very easy to monetize, but having compelling content or services first is a prerequisite.
      • Fully Automated Online Customer Self-Service: Let users get what they want, when they want it, without help.  It seems easy, but almost all companies have people in the loop to manage the edge cases.  Unfortunately, edge cases represent the Long Tail of customer service.  This is hard, but in the end it provides goods and services with much tighter feedback loops.  And it’s also a mandatory prerequisite for cost-effectively serving mass micromarkets.  In other words, you can’t directly monetize the Long Tail without this.
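
To illustrate the “fenced garden” idea from the list above, here is a minimal sketch, with a made-up data set and made-up limits rather than any real service’s API, of an endpoint that answers individual queries freely but never lets a single caller walk off with the whole data source:

```python
# Hypothetical "fenced garden": callers get answers to queries, not the index itself.
from collections import defaultdict

INDEX = {"web 2.0": ["site-a", "site-b"], "mashups": ["site-c"]}  # invented data
DAILY_QUERY_LIMIT = 1000                                          # invented limit

queries_used = defaultdict(int)  # api_key -> queries made today

def search(api_key, term):
    if queries_used[api_key] >= DAILY_QUERY_LIMIT:
        raise PermissionError("daily quota exceeded; bulk export is not offered")
    queries_used[api_key] += 1
    return INDEX.get(term.lower(), [])

print(search("demo-key", "Web 2.0"))  # ['site-a', 'site-b']
```

Users still get everything they need for any individual lookup; what they never get is a way to reconstruct the hard-to-recreate data source itself.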

Lying directly within the primary tenets of Web 2.0, however, is a series of two-edged issues from a revenue perspective.  Though the concepts and ideas are powerful when applied appropriately, they can also pose significant short-term and long-term challenges.  Below are the basic principles of Web 2.0 along with the positive and negative revenue implications for most companies on the Web today, even ones that aren’t fully embracing it yet.

 Revenue Implications for Web 2.0 Principles (not meant to be exhaustive)

  • Principle 1: Web as Platform
    • Upside:  Revenue scalability (1 billion users on the Web), rapid growth potential and reach through exploitation of network effects
    • Downside: Competition is only a URL away, often requiring significant investment in differentiation
  • Principle 2: Software Above a Single Device
    • Upside: More opportunities to deliver products and services to users in more situations
    • Downside: Upfront costs, more infrastructure, more development/testing/support (costs) to deliver products across multiple devices
  • Principle 3: Data is the Next “Intel Inside”
    • Upside: Customer loyalty and even lock-in
    • Downside:  Lack of competitive pressure leading to complacency, long-term potential antitrust issues
  • Principle 4: Lightweight Programming & Business Models
  • Principle 5: Rich User Experiences
    • Upside: More productive and satisfied users, competitive advantage
    • Downside: Higher cost of development, potentially lower new user discoverability and adoption
  • Principle 6: Harnessing Collective Intelligence
    • Upside: Much lower costs of production, higher rate of innovation, dramatically larger overall content output
    • Downside: Lower level of direct control, governance issues (increased dependence on user base), content management issues, and legal exposure over IP
  • Principle 7: Leveraging The Long Tail
    • Upside: Cost-effectively reach thousands of small, previously unprofitable market segments resulting in overall customer growth
    • Downside: Upfront investment costs can be very significant, managing costs of customer service long-term

While a great many startups are not yet generating revenue in huge quantities, the companies that have been diligently exploiting open APIs, such as Amazon and Salesforce, are in fact generating significant revenue and second-order effects from opening up their platforms and being careful not to lose control.  This is actually a larger discussion, and as large Web 2.0 sites continue to emerge, we’ll continue to keep track of what the successful patterns and practices are.

What other implications are there when you put users in control of content generation and open everything up?


    Putting Web 2.0 in Perspective

    December 1, 2006

    Jiyan Wei, Vice President, Online Media, v-Fluence Interactive Public Relations

    At a recent conference, I overheard one communications professional ask another, “What is our Web 2.0 strategy?”

    Despite my background and experience with the Web and new media, I wondered what exactly they meant by a Web 2.0 strategy, and soon after decided to register for the official Web 2.0 conference, set to take place November 7-9 in San Francisco, to learn more. I visited the conference Web site and found a quote from Ross Mayfield, “Web 1.0 was commerce. Web 2.0 is people,” as well as an impressive guest roster featuring a mix of voices from both traditional and new media firms. However, when I went to sign up, the $3,000 event was sold out. I then began to search for a comparable event and stumbled upon the Web site for ‘Web 2point1,’ where I learned that the non-profit organization running the site had chosen that particular moniker after being threatened with legal action by O’Reilly Media (who coined the term Web 2.0) and CMP.

    I couldn’t help but note the irony that the Web 2.0 conference was cost-prohibitive to most ordinary folks, and that a non-profit had been threatened with legal action by the organizers of the Web 2.0 conference for attempting to use the same name. After all, isn’t Web 2.0 supposed to be about participatory media and collaborative development? Isn’t it about people?

    What exactly is Web 2.0, and will it replace what we now know as the Web and the way in which we all communicate, as many seem to claim?


    Sneak Peek at the 2007 Tracks

    December 1, 2006

    I don’t normally share the track structure before it is released publicly via the TechEd website, but what the heck!  Below are the tracks by which all of the content (sessions, labs, chalk-talks) will be organized.  This is shaping up to be an incredible year in terms of content, with an unbelievable number of new topics to cover in depth.  This will be the most important year in TechEd’s history, with the launch of Office 2007, Windows Vista, Exchange Server 2007, various management products (MOM 2007, System Center Essentials, Configuration Manager 2007, etc.), Windows Server code-named "Longhorn", "Orcas", as well as a sneak peek at the next version of SQL Server. Determining the allocation of sessions between these technologies is by far the most difficult part of my job.

    So here they are… the ’07 track structure:

    • Architecture
    • Business Applications (Dynamics products)
    • Business Intelligence
    • Connected Systems (BizTalk, WCF, etc.)
    • Database Development and Administration
    • Management and Operations
    • Windows Server Infrastructure
    • Developer Tools and Technologies
    • Web Development and Infrastructure
    • Mobility
    • Office System
    • Security
    • Unified Communications (NEW!)
    • Windows Client
    • Microsoft IT
    • Identity and Access (NEW!)


    Social Software for Learning

    December 1, 2006


    I had the opportunity yesterday to participate in an online forum using Elluminate as part of The Social Software/Web 2.0 Technologies Research Project which is funded by the Australian Flexible Learning Framework’s Knowledge Sharing Services and Research and Policy Advice Projects.

    It’s really great to be able to be a part of this sort of forum and participate in discussions focusing on research around this emerging area of interest and activity in the online world – particularly as it pertains to education.

    There were some really interesting examples of the educational use of Social Software being shared in the forum – and a whole lot more shared on the wiki, some relating to the use of SS with students, and others in relation to the use of SS for professional development. It always impresses me how creative and imaginative some teachers can be with new tools and environments like this.

    I can’t help but observe, however, the ongoing point of tension in these sorts of discussions. The very fact that we are looking at how to integrate the use of SS into our teaching and learning programmes assumes that this is (a) possible and (b) desirable.

    Social software, by its very nature, is essentially about providing forms of expression for individuals who are then connected with other individuals to form multi-layered networks based on common areas of interest or concern. These networks thrive on the contributions of the individuals, both to their personal environments and to the environments of others. The networks tend to be very democratic and fluid, with structure and form being determined by the participants.

    Contrast that with the adoption of such environments within formal education processes. Regardless of how well intentioned the teacher/tutor may be, there is inevitably a level of imposed structure and expectation brought to bear. Formal education experiences are by nature time bound, require assessment and adhere to a curriculum. All of these parameters are (generally) established externally to the participants. Further, choosing to become a participant in a course does not automatically mean one would choose to become a ‘blogger’, for instance – and we observe how important personal motivation and ‘ownership’ are in maintaining a profile within the social networking space.

    The relationship between the use of social software by individuals and its appropriation within formal teaching and learning situations is what I’ve tried to illustrate in my recent post on MLEs and PLEs, and also in my paper on the scope of the PLE.

    Our use of these environments is still at an emergent stage, and research such as this will provide some much-needed insights into what is working well – and what isn’t. The research team of Val Evans, Susan Stolz and Larraine Larri have also established a blog in which they invite people to contribute thoughts and ideas connected with their research questions. With an increasing number of people becoming interested in making the use of social software a focus of research, this might be a useful forum to become a part of. Although it is focused on the post-school sector (VET), there are plenty of lessons that could be learned from (and contributed by) those who are using social software in other areas of the education system.