The Project Gutenberg Etext of E-books and e-publishing by Sam Vaknin
#4 in our series by Sam Vaknin

** This is a COPYRIGHTED Project Gutenberg Etext, Details Below **
**     Please follow the copyright guidelines in this file.     **


Copyright (C) 2000 Copyright Lidija Rangelovska.

We encourage you to keep this file, exactly as it is, on your
own disk, thereby keeping an electronic path open for future
readers.  Please do not remove this header information.

This header should be the first thing seen when anyone starts to
view the etext. Do not change or edit it without written permission.
The words are carefully chosen to provide users with the
information they need to understand what they may and may not
do with the etext.


**Welcome To The World of Free Plain Vanilla Electronic Texts**

**Etexts Readable By Both Humans and By Computers, Since 1971**

*****These Etexts Are Prepared By Thousands of Volunteers!*****

Information on contacting Project Gutenberg to get etexts, and
further information, is included below.  We need your donations.

The Project Gutenberg Literary Archive Foundation is a 501(c)(3)
organization with EIN [Employer Identification Number] 64-6221541



Title: E-books and e-publishing

Author: Sam Vaknin

Release Date: December, 2003 [Etext #4742]
[Yes, we are more than one year ahead of schedule]
[This file was first posted on March 11, 2002]

Edition: 10

Language: English

Character set encoding: ASCII

The Project Gutenberg Etext of E-books and e-publishing by Sam Vaknin
*******This file should be named ebpub10.txt or ebpub10.zip******

Corrected EDITIONS of our etexts get a new NUMBER, ebpub11.txt
VERSIONS based on separate sources get new LETTER, ebpub10a.txt

We are now trying to release all our etexts one year in advance
of the official release dates, leaving time for better editing.
Please be encouraged to tell us about any error or corrections,
even years after the official publication date.

Please note that neither this listing nor its contents are final until
midnight of the last day of the month of any such announcement.
The official release date of all Project Gutenberg Etexts is at
Midnight, Central Time, of the last day of the stated month.  A
preliminary version may often be posted for suggestion, comment
and editing by those who wish to do so.

Most people start at our sites at:
http://gutenberg.net or
http://promo.net/pg

These Web sites include award-winning information about Project
Gutenberg, including how to donate, how to help produce our new
etexts, and how to subscribe to our email newsletter (free!).


Those of you who want to download any Etext before announcement
can get to them as follows, and just download by date.  This is
also a good way to get them instantly upon announcement, as the
indexes our cataloguers produce obviously take a while after an
announcement goes out in the Project Gutenberg Newsletter.

http://www.ibiblio.org/gutenberg/etext03 or
ftp://ftp.ibiblio.org/pub/docs/books/gutenberg/etext03

Or /etext02, 01, 00, 99, 98, 97, 96, 95, 94, 93, 92, 91 or 90

Just search by the first five letters of the filename you want,
as it appears in our Newsletters.


Information about Project Gutenberg (one page)

We produce about two million dollars for each hour we work.  The
time it takes us, a rather conservative estimate, is fifty hours
to get any etext selected, entered, proofread, edited, copyright
searched and analyzed, the copyright letters written, etc.   Our
projected audience is one hundred million readers.  If the value
per text is nominally estimated at one dollar, then we produce $2
million per hour in 2001 as we release over 50 new Etext
files per month, or 500 more Etexts in 2000 for a total of 4000+.
If they reach just 1-2% of the world's population, then the total
should reach over 300 billion Etexts given away by year's end.

The Goal of Project Gutenberg is to Give Away One Trillion Etext
Files by December 31, 2001.  [10,000 x 100,000,000 = 1 Trillion]
This is ten thousand titles each to one hundred million readers,
which is only about 4% of the present number of computer users.

At our revised rates of production, we will reach only one-third
of that goal by the end of 2001, or about 4,000 Etexts.  We need
funding, as well as continued efforts by volunteers, to maintain
or increase our production and reach our goals.

The Project Gutenberg Literary Archive Foundation has been created
to secure a future for Project Gutenberg into the next millennium.

We need your donations more than ever!

As of January, 2002, contributions are being solicited from people
and organizations in: Alabama, Alaska, Arkansas, Connecticut, Delaware,
Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky,
Louisiana, Maine, Michigan, Missouri, Montana, Nebraska, Nevada, New
Jersey, New Mexico, New York, North Carolina, Oklahoma, Oregon,
Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee,
Texas, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin,
and Wyoming.

As the requirements for other states are met, additions to this list
will be made and fund raising will begin in the additional states.
Please feel free to ask to check the status of your state.

In answer to various questions we have received on this:

We are constantly working on finishing the paperwork to legally
request donations in all 50 states.  If your state is not listed and
you would like to know if we have added it since the list you have,
just ask.

While we cannot solicit donations from people in states where we are
not yet registered, we know of no prohibition against accepting
donations from donors in these states who approach us with an offer to
donate.

International donations are accepted!  For more information
about donations, please view http://promo.net/pg/donation.html
We accept PayPal, as well as donations via NetworkForGood.

Donation checks should be sent to:

Project Gutenberg Literary Archive Foundation
PMB 113
1739 University Ave.
Oxford, MS 38655-4109


The Project Gutenberg Literary Archive Foundation has been approved by
the US Internal Revenue Service as a 501(c)(3) organization with EIN
[Employer Identification Number] 64-6221541.  Donations are
tax-deductible to the maximum extent permitted by law.  As fundraising
requirements for other states are met, additions to this list will be
made and fundraising will begin in the additional states.

We need your donations more than ever!

***

If you can't reach Project Gutenberg,
you can always email directly to:

Michael S. Hart <hart@pobox.com>

Prof. Hart will answer or forward your message.

We would prefer to send you information by email.


**Information prepared by the Project Gutenberg legal advisor**
(Three Pages)

***START** SMALL PRINT! for COPYRIGHT PROTECTED ETEXTS ***

TITLE AND COPYRIGHT NOTICE:

E-books and e-publishing, by Sam Vaknin
Copyright (C) 2000 Copyright Lidija Rangelovska.

This etext is distributed by Professor Michael S. Hart through the
Project Gutenberg Association (the "Project") under the "Project
Gutenberg" trademark and with the permission of the etext's
copyright owner.

Please do not use the "PROJECT GUTENBERG" trademark to market
any commercial products without permission.


LICENSE
You can (and are encouraged to!) copy and distribute this
Project Gutenberg-tm etext.  Since, unlike many other of the
Project's etexts, it is copyright protected, and since the
materials and methods you use will affect the Project's reputation,
your right to copy and distribute it is limited by the copyright
laws and by the conditions of this "Small Print!" statement.

  [A]  ALL COPIES: You may distribute copies of this etext
electronically or on any machine readable medium now known
or hereafter discovered so long as you:

     (1)  Honor the refund and replacement provisions of this
"Small Print!" statement; and

     (2)  Pay a royalty to the Foundation of 20% of the gross
profits you derive calculated using the method you already use
to calculate your applicable taxes.  If you don't derive
profits, no royalty is due.  Royalties are payable to "Project
Gutenberg Literary Archive Foundation" within the 60 days
following each date you prepare (or were legally required
to prepare) your annual (or equivalent periodic) tax return.

  [B]  EXACT AND MODIFIED COPIES: The copies you distribute
must either be exact copies of this etext, including this
Small Print statement, or can be in binary, compressed, mark-
up, or proprietary form (including any form resulting from
word processing or hypertext software), so long as *EITHER*:

     (1)  The etext, when displayed, is clearly readable, and
does *not* contain characters other than those intended by the
author of the work, although tilde (~), asterisk (*) and
underline (_) characters may be used to convey punctuation
intended by the author, and additional characters may be used
to indicate hypertext links; OR

     (2)  The etext is readily convertible by the reader at no
expense into plain ASCII, EBCDIC or equivalent form by the
program that displays the etext (as is the case, for instance,
with most word processors); OR

     (3)  You provide or agree to provide on request at no
additional cost, fee or expense, a copy of the etext in plain
ASCII.

LIMITED WARRANTY; DISCLAIMER OF DAMAGES
This etext may contain a "Defect" in the form of incomplete,
inaccurate or corrupt data, transcription errors, a copyright
or other infringement, a defective or damaged disk, computer
virus, or codes that damage or cannot be read by your
equipment.  But for the "Right of Replacement or Refund"
described below, the Project (and any other party you may
receive this etext from as a PROJECT GUTENBERG-tm etext)
disclaims all liability to you for damages, costs and
expenses, including legal fees, and YOU HAVE NO REMEDIES FOR
NEGLIGENCE OR UNDER STRICT LIABILITY, OR FOR BREACH OF
WARRANTY OR CONTRACT, INCLUDING BUT NOT LIMITED TO INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES, EVEN IF YOU
GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGES.

If you discover a Defect in this etext within 90 days of
receiving it, you can receive a refund of the money (if any)
you paid for it by sending an explanatory note within that
time to the person you received it from.  If you received it
on a physical medium, you must return it with your note, and
such person may choose to alternatively give you a replacement
copy.  If you received it electronically, such person may
choose to alternatively give you a second opportunity to
receive it electronically.

THIS ETEXT IS OTHERWISE PROVIDED TO YOU "AS-IS".  NO OTHER
WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, ARE MADE TO YOU AS
TO THE ETEXT OR ANY MEDIUM IT MAY BE ON, INCLUDING BUT NOT
LIMITED TO WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.  Some states do not allow disclaimers of
implied warranties or the exclusion or limitation of
consequential damages, so the above disclaimers and exclusions
may not apply to you, and you may have other legal rights.

INDEMNITY
You will indemnify and hold Michael Hart and the Foundation,
and its trustees and agents, and any volunteers associated
with the production and distribution of Project Gutenberg-tm
texts harmless, from all liability, cost and expense, including
legal fees, that arise directly or indirectly from any of the
following that you do or cause:  [1] distribution of this etext,
[2] alteration, modification, or addition to the etext,
or [3] any Defect.

WHAT IF YOU *WANT* TO SEND MONEY EVEN IF YOU DON'T HAVE TO?
Project Gutenberg is dedicated to increasing the number of
public domain and licensed works that can be freely distributed
in machine readable form.

The Project gratefully accepts contributions of money, time,
public domain materials, or royalty free copyright licenses.
Money should be paid to the:
"Project Gutenberg Literary Archive Foundation."

If you are interested in contributing scanning equipment or
software or other items, please contact Michael Hart at:
hart@pobox.com


*SMALL PRINT! Ver.12.12.00 FOR COPYRIGHT PROTECTED ETEXTS*END*
Additional articles about Digital Content on the Web:
 
http://samvak.tripod.com/busiweb.html
 
http://www.trendsiters.com
 
Sam Vaknin's eBookWeb.org articles:
 
http://ebookweb.org.master.com/texis/master/search/?q=Vaknin
 
Sam Vaknin's "InternetContent" Author Archive:
 
http://www.internetcontent.net/AuthorProfile.asp?AuthorID=14
 
Essays dedicated to the new media, doing business on the web, digital 
content, its creation and distribution, e-publishing, e-books, digital 
reference, DRM technology, and other related issues.

http://samvak.tripod.com/internet.html
 
Digital Content on the Web Study Modules - 
 
http://www.blackboard.com/courses/digitalcontent/
 
This letter constitutes permission to reprint or mirror any and all of 
the materials mentioned or linked to herein, subject to appropriate credit 
and linkback.
 
Every article published MUST include the author bio (including the link 
to the author's web site) or a link to it.
 
 
AUTHOR BIO:
Sam Vaknin is the author of Malignant Self Love - Narcissism Revisited and 
After the Rain - How the West Lost the East. He is a columnist for Central 
Europe Review, United Press International (UPI) and eBookWeb, and the editor 
of the mental health and Central East Europe categories in The Open 
Directory, Suite101 and searcheurope.com. 
Until recently, he served as the Economic Advisor to the Government of 
Macedonia. 
Visit Sam's Web site at http://samvak.tripod.com


 
 
 
The Articles (please scroll down to review them):
 
E-books and e-publishing

The Future of Electronic Publishing
 
I. The Disintermediation of Content
II. E(merging) Books 
III. Invasion of the Amazons 
IV. Revolt of the Scholars
V. The Kidnapping of Content
VI. The Miraculous Conversion
VII. The Medium and the Message
VIII. The Idea of Reference
IX. Will Content ever be Profitable?
X. Jamaican OverDrive - LDC's and LCD's
XI. An Embarrassment of Riches
XII. The Fall and Fall of p-Zines
XIII. The Internet and the Library
XIV. A Brief History of the Book
XV. The Affair of the Vanishing Content
XVI. Revolt of the Poor - The Demise of Intellectual Property
XVII. The Territorial Web
XVIII. The In-credible Web
 
Web Technology and Trends
 
I. Bright Planet, Deep Web
II. The Seamless Internet
III. The Polyglottal Internet
IV. Deja Googled
V. Maps of Cyberspace

The Internet and the Digital Divide

I. The Internet - A Medium or a Message?
II. The Internet in the Countries in Transition
III. The Selfish Net - The Semantic Web
 
Author: Sam Vaknin 
 
Contact Info: palma@unet.com.mk; samvak@visto.com



 
E-BOOKS AND E-PUBLISHING

The Future of Electronic Publishing
By: Sam Vaknin 
UNESCO's somewhat arbitrary definition of "book" is: 
 
""Non-periodical printed publication of at least 49 pages excluding 
covers". 
 
The emergence of electronic publishing was supposed to change all that. Yet 
a bloodbath of unusual proportions has taken place in the last few months. 
Time Warner's iPublish and MightyWords (partly owned by Barnes and Noble) 
were the last in a string of resounding failures which cast doubt on the 
business model underlying digital content. Everything seemed to have gone 
wrong: the dot.coms dot bombed, venture capital dried up, competing 
standards fractured an already fragile marketplace, the hardware (e-book 
readers) was clunky and awkward, the software unwieldy, the e-books badly 
written or already in the public domain. 
 
Terrified by the inexorable process of disintermediation (the establishment 
of direct contact between author and readers, excluding publishers and 
bookstores) and by the ease with which digital content can be replicated - 
publishers resorted to draconian copyright protection measures 
(euphemistically known as "digital rights management"). This further 
alienated the few potential readers left. The opposite model of "viral" or 
"buzz" marketing (by encouraging the dissemination of free copies of the 
promoted book) was only marginally more successful. 
 
Moreover, e-publishing's delivery platform, the Internet, has been 
transformed beyond recognition since March 2000. 
 
From an open, somewhat anarchic, web of networked computers - it has 
evolved into a territorial, commercial, corporate extension of "brick and 
mortar" giants, subject to government regulation. It is less friendly 
towards independent (small) publishers, the backbone of e-publishing. 
Increasingly, it is expropriated by publishing and media behemoths. It is 
treated as a medium for cross promotion, supply chain management, and 
customer relations management. It offers only some minor synergies 
with non-cyberspace, real world, franchises and media properties. The likes 
of Disney and Bertelsmann have swung a full circle from considering the 
Internet to be the next big thing in New Media delivery - to 
frantic efforts to contain the red ink it oozed all over their otherwise 
impeccable balance sheets.
 
But were the now silent pundits right all the same? Is the future of 
publishing (and other media industries) inextricably intertwined with the 
Internet?
 
The answer depends on whether an old habit dies hard. Internet surfers are 
used to free content. They are very reluctant to pay for information (with 
precious few exceptions, like the "Wall Street Journal"'s electronic 
edition). Moreover, the Internet, with 3 billion pages listed in the Google 
search engine (and another 15 billion in "invisible" databases), provides 
many free substitutes to every information product, no matter how superior. 
Web based media companies (such as Salon and Britannica.com) have been 
experimenting with payment and pricing models. But this is beside the 
point. Whether in the form of subscription (Britannica), pay per view 
(Questia), pay to print (Fathom), sample and pay to buy the physical 
product (RealRead), or micropayments (Amazon) - the public refuses to cough 
up. 
 
Moreover, the advertising-subsidized free content Web site has died 
together with Web advertising. Geocities - a community of free hosted, ad-
supported, Web sites purchased by Yahoo! - is now selectively shutting down 
Web sites (when they exceed a certain level of traffic) to convince their 
owners to revert to a monthly hosting fee model. With Lycos in trouble in 
Europe, Tripod may well follow suit shortly. Earlier this year, Microsoft 
shut down ListBot (a host of discussion lists). Suite101 stopped 
paying its editors (content authors) effective January 15th. About.com 
fired hundreds of category editors. With the ugly demise of Themestream, 
WebSeed is the only content aggregator which tries to buck the trend by 
relying (partly) on advertising revenue.
 
Paradoxically, e-publishing's main hope may lie with its ostensible 
adversary: the library. Unbelievably, e-publishers actually tried to limit 
the access of library patrons to e-books (i.e., the lending of e-books to 
multiple patrons). But libraries are not only repositories of knowledge 
and community centres. They are also dominant promoters of new knowledge 
technologies. They are already the largest buyers of e-books. Together with 
schools and other educational institutions, libraries can serve as decisive 
socialization agents and introduce generations of pupils, students, and 
readers to the possibilities and riches of e-publishing. Government use of 
e-books (e.g., by the military) may have the same beneficial effect.
 
As standards converge (Adobe's Portable Document Format and Microsoft's MS 
Reader LIT format are likely to be the winners), as hardware improves and 
becomes ubiquitous (within multi-purpose devices or as standalone higher 
quality units), as content becomes more attractive (already many new titles 
are published in both print and electronic formats), as more versatile 
information taxonomies (like the Digital Object Identifier) are introduced, 
as the Internet becomes more gender-neutral, polyglot, and cosmopolitan - 
e-publishing is likely to recover and flourish. 
 
This renaissance will probably be aided by the gradual decline of print 
magazines and by a strengthening movement for free open source scholarly 
publishing. The publishing of periodical content and academic research 
(including, gradually, peer reviewed research) may be already shifting to 
the Web. Non-fiction and textbooks will follow. Alternative models of 
pricing are already in evidence (author pays to publish, author pays 
to obtain peer review, publisher pays to publish, buy a physical product 
and gain access to enhanced online content, and so on). Web site rating 
agencies will help to discriminate between the credible and the in-
credible. Publishing is moving - albeit kicking and screaming - online.


The Disintermediation of Content
By: Sam Vaknin
 
Are content brokers - publishers, distributors, and record companies - a 
thing of the past?
 
In one word: disintermediation.
 
The gradual removal of layers of content brokering and intermediation - 
mainly in manufacturing and marketing - is the continuation of a long-term 
trend. Consider music, for instance. Streaming audio on the Internet ("soft 
radio"), or downloadable MP3 files, may render the CD obsolete - but they 
were preceded by radio music broadcasts. But the novelty is that the 
Internet provides a venue for the marketing of niche products and reduces 
the barriers to entry previously imposed by the need to invest in costly 
"branding" campaigns and manufacturing and distribution activities.
 
This trend is also likely to restore the balance between artists and the 
commercial exploiters of their products. The very definition of "artist" 
will expand to encompass all creative people. One will seek to distinguish 
oneself, to "brand" oneself and to auction one's services, ideas, products, 
designs, experience, physique, or biography, etc. directly to end-users and 
consumers. This is a return to pre-industrial times when artisans ruled the 
economic scene. Work stability will suffer and work mobility will increase 
in a landscape of shifting allegiances, head hunting, remote collaboration, 
and similar labour market trends.
 
But distributors, publishers, and record companies are not going to vanish. 
They are going to metamorphose. This is because they fulfil a few functions 
and provide a few services whose importance is only enhanced by the "free 
for all" Internet culture.
 
Content intermediaries grade content and separate the qualitative from the 
ephemeral and the atrocious. The deluge of self-published and vanity 
published e-books, music tracks and art works has generated few 
masterpieces and a lot of trash. The absence of judicious filtering has 
unjustly given a bad name to whole segments of the industry (e.g., small, 
or web-based publishers). Consumers - inundated, disappointed and exhausted 
- will pay a premium for content rating services. Though driven by crass 
commercial considerations, most publishers and record companies do apply 
certain quality standards routinely and thus are positioned to provide 
these rating services reliably.
 
Content brokers are relationship managers. Consider distributors: they 
provide instant access to centralized, continuously updated, "addressbooks" 
of clients (stores, consumers, media, etc.). This reduces the time to 
market and increases efficiency. It alters revenue models very 
substantially. Content creators can thus concentrate on what they do best: 
content creation, and reduce their overhead by outsourcing the functions of 
distribution and relationships management. The existence of central 
"relationship ledgers" yields synergies which can be applied to all the 
clients of the distributor. The distributor provides a single address that 
content re-sellers converge on and feed off. Distributors, publishers and 
record companies also provide logistical support: warehousing, consolidated 
sales reporting and transaction auditing, and a single, periodic payment.
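 
To make the logistical function concrete, here is a minimal sketch (in 
Python) of the consolidated sales reporting and single periodic payment a 
distributor provides. The retailer names, figures and the 30% fee are 
illustrative assumptions, not any real distributor's terms.
 
  # Sketch: a distributor consolidates per-retailer sales for one content
  # creator into a single report and one periodic payment.
  sales = [
      {"retailer": "Store A", "title": "Essays", "units": 120, "price": 4.00},
      {"retailer": "Store B", "title": "Essays", "units": 45, "price": 4.00},
      {"retailer": "Store A", "title": "Novel", "units": 30, "price": 6.50},
  ]
  DISTRIBUTOR_CUT = 0.30  # assumed 30% distribution fee

  gross = sum(s["units"] * s["price"] for s in sales)
  print(f"Gross sales across all retailers: ${gross:.2f}")
  print(f"Single periodic payment to creator: ${gross * (1 - DISTRIBUTOR_CUT):.2f}")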
 
Yet, having said all that, content intermediaries still over-charge their 
clients (the content creators) for their services. This is especially true 
in an age of just-in-time inventory and digital distribution. Network 
effects mean that content brokers have to invest much less in marketing, 
branding and advertising once a product's first mover advantage is 
established. Economic laws of increasing, rather than diminishing, returns 
mean that every additional unit sold yields a HIGHER profit - rather than a 
declining one. The pie is getting bigger.
 
Hence, the meteoric increase in royalties publishers pay authors from sales 
of the electronic versions of their work (anywhere from Random House's 35% 
to 50% paid by smaller publishers). As this tectonic shift reverberates 
through the whole distribution chain, retail outlets are beginning to 
transact directly with content creators. The borders between the types of 
intermediaries are blurred. Barnes and Noble (the American bookstores 
chain) has, in effect, become a publisher. Many publishers have virtual 
storefronts. Many authors sell directly to their readers, acting as 
publishers. The introduction of "book ATMs" - POD (Print On Demand) 
machines, which will print  
every conceivable title in minutes, on the spot, in "book kiosks" - will 
give rise to a host of new intermediaries. Intermediation is not gone. It 
is here to stay because it is sorely needed. But it is in a state of flux. 
Old maxims break down. New modes of operation emerge. 
 
Functions are amalgamated, outsourced, dispensed with, or created from 
scratch. It is an exciting scene, full of opportunities.
 
 
 
E(merging) Books
By: Sam Vaknin
 
A novel re-definition of the classical format of the book is emerging 
through experimentation.
 
Consider the now defunct BookTailor. It used to sell its book customization 
software mainly to travel agents - but such software is likely to conquer 
other niches (such as the legal and medical professions). It allows users 
to select bits and pieces from a library of e-books, combine them into a 
totally new tome and print and bind the latter on demand. The client can 
also choose to buy the end-product as an e-book. Consider what this simple 
business model does to entrenched and age old notions such as "original"  
and "copies", copyright, and book identifiers. What is the "original" in 
this case? Is it the final, user-customized book - or its sources? And if 
no customized book is identical to any other - what happens to the 
intuitive notion of "copies"? Should BookTailor-generated books considered 
to be unique exemplars of one-copy print runs? If so, should each one 
receive a unique identifier (for instance, a unique ISBN)? Does the user 
possess any rights in the final product, composed and selected by him? What 
about the copyrights of the original authors? 
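 
One way to see the identifier problem concretely: treat each customized 
compilation as the set of source sections it draws on, and derive its 
identity from that set. A minimal sketch in Python - the section library 
is invented, and a real system would presumably issue ISBNs or DOIs 
rather than a bare hash:
 
  import hashlib

  # Invented mini-library: source work -> its sections.
  library = {
      "travel-guide-1": ["Visas", "Vaccinations", "Currency"],
      "travel-guide-2": ["Packing", "Insurance"],
  }

  def assemble(selections):
      """Build a custom 'book' from (source, section) picks and derive a
      deterministic identifier: identical selections yield the same ID,
      any other combination yields a different one."""
      parts = sorted(f"{src}/{sec}" for src, sec in selections)
      digest = hashlib.sha256("\n".join(parts).encode()).hexdigest()[:12]
      return {"id": digest, "sections": parts}

  book = assemble([("travel-guide-1", "Visas"), ("travel-guide-2", "Packing")])
  print(book["id"], book["sections"])
 
Under this scheme every distinct selection is a one-copy print run with 
its own identifier - which only sharpens the question of what, if 
anything, the "original" is.
 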
Or take BookCrossing.com. On the face of it, it presents no profound 
challenge to established publishing practices and to the modern concept of 
intellectual property. Members register their books, obtain a BCID 
(BookCrossing ID Number) and then give the book to someone, or simply leave 
it lying around for a total stranger to find. Henceforth, fate determines 
the chain of events. Eventual successive owners of the volume are supposed 
to report to BookCrossing (by e-mail) about the book's and their 
whereabouts, thereby generating moving plots and mapping the territory of 
literacy and bibliomania. This innocuous model subversively undermines the 
concept - legal and moral - of ownership. It also expropriates the book 
from the realm of passive, inert objects and transforms it into a catalyst 
of human interactions across time and space. In other words, it returns the 
book to its origins: a time capsule, a time machine and the embodiment of a 
historical narrative. 
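 
The underlying data model is simple. A minimal sketch (Python) of a 
BookCrossing-style registry, where each BCID accumulates a journey - the 
chain of sightings reported by successive finders. The field names and 
sample data are illustrative assumptions, not BookCrossing's actual schema:
 
  registry = {}

  def register(bcid, title):
      # A new book enters the system with an empty journey.
      registry[bcid] = {"title": title, "journey": []}

  def report_sighting(bcid, date, place):
      # Each finder's e-mail report appends a stop to the book's journey.
      registry[bcid]["journey"].append((date, place))

  register("123-4567890", "A Brief History of the Book")
  report_sighting("123-4567890", "2002-03-01", "a cafe in Skopje")
  report_sighting("123-4567890", "2002-04-12", "Thessaloniki bus station")

  for date, place in registry["123-4567890"]["journey"]:
      print(date, "-", place)
 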
E-books, hitherto, have largely been nothing but an ephemeral rendition of 
their print predecessors. But e-books are another medium altogether. They 
can and will provide a different reading experience.  Consider "hyperlinks 
within the e-book and without it - to web content, reference works, etc., 
embedded instant shopping and ordering links, divergent, user-interactive, 
decision driven plotlines, interaction with other e-books (using Bluetooth 
or another wireless standard), collaborative authoring, gaming and 
community activities, automatically or periodically updated content, 
multimedia capabilities, database, Favourites and History Maintenance 
(records of reading habits, shopping habits, interaction with other 
readers, plot related decisions and much more), automatic and embedded 
audio conversion and translation capabilities, full wireless piconetworking 
and scatternetworking capabilities and more". 

INVASION OF THE AMAZONS
By: Sam Vaknin
 
The last few months have witnessed a bloodbath in tech stocks coupled with 
a frantic re-definition of the web and of every player in it (as far as 
content is concerned). 
 
This effort is three pronged:
 
Some companies are gambling on content distribution and the possession of 
the attendant digital infrastructure. MightyWords, for example, stealthily 
transformed itself from a "free-for-all-everyone-welcome" e-publisher to a 
distribution channel of choice works (mainly by midlist authors). It now 
aims to feed its content to content-starved web sites. In the process, it 
shed thousands of unfortunate authors who did not meet its (never stated) 
sales criteria. 
 
Others bet the farm on content creation and packaging. Bn.com invaded the 
digital publishing and POD (Print on Demand) businesses in a series of 
lightning purchases. It is now the largest e-book store by a wide margin.
 
But Amazon seemed to have got it right once more. The web's own virtual 
mall and the former darling of Wall Street has diversified into 
micropayments.
 
The Internet started as a free medium for free spirits. E-commerce was once 
considered a dirty word. Web surfers became used to free content. Hence the 
(very low) glass ceiling on the price of content made available through the 
web - and the need to charge customers less than one US dollar to a few 
dollars per transaction ("micro-payments"). Various service providers (such 
as PayPal) emerged, but none became sufficiently dominant and all-pervasive to 
constitute a standard. Web merchants' ability to accept micropayments is 
crucial. E-commerce (let alone m-commerce) will never take off without it.
 
Enter Amazon. Its "Honour System" is licenced to third party web sites 
(such as Bartleby.com and SatireWire). It allows people to donate money or 
effect micro-payments, apparently through its patented one-click system. As 
far as the web sites are concerned, there are two major drawbacks: all 
donations and payments are refundable within 30 days and Amazon charges 
them 15 cents per transaction plus 15(!) percent. By far the worst deal in 
town.
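 
The arithmetic behind "worst deal in town" is easy to verify. A quick 
Python sketch of the fee on small payments, under the terms quoted above 
(15 cents flat plus 15 percent):
 
  def honour_system_fee(amount):
      """Fee as described above: a flat 15 cents plus 15 percent."""
      return 0.15 + 0.15 * amount

  for amount in (1.00, 2.00, 5.00):
      fee = honour_system_fee(amount)
      print(f"${amount:.2f} payment -> ${fee:.2f} fee ({fee / amount:.0%})")
 
A one-dollar micropayment loses 30 cents - fully 30% - to fees, which is 
precisely why the deal looks so bad at micropayment scale.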
 
So, why the fuss?
 
Because of Amazon's customer list. This development emphasizes the growing 
realization that one's list of customers - properly data mined - is the 
greatest asset, greater even than original content and more important than 
distribution channels and digital right management or asset management 
applications. Merchants are willing to pay for access to this ever 
expanding virtual neighbourhood (even if they are not made privy to 
the customer information collected by Amazon). 
 
The Honour System looks suspiciously similar to the payment system designed 
by Amazon for Stephen King's serialized e-novel, "The Plant". Interesting 
to note how the needs of authors and publishers are now in the driver's 
seat, helping to spur along innovations in business methods. 
 
 
 
Revolt of the Scholars
By: Sam Vaknin
 
http://www.realsci.com/
 
Scindex's Instant Publishing Service is about empowerment. The price of 
scholarly, peer-reviewed journals has skyrocketed in the last few years, 
often way out of the limited means of libraries, universities, individual 
scientists and scholars. A "scholarly divide" has opened between the haves 
(academic institutions with rich endowments and well-heeled corporations) 
and the have-nots (all the others). Paradoxically, access to authoritative 
and authenticated knowledge has declined as the number of professional 
journals has proliferated. This is not to mention the long (and often 
crucial) delays in publishing research results and the shoddy work of many 
under-paid and over-worked peer reviewers.
 
The Internet was supposed to change all that. Originally a computer network 
for the exchange of (restricted and open) research results among scientists 
and academics in participating institutions - it was supposed to provide 
instant publishing, instant access and instant gratification. It has 
delivered only partially. Preprints of academic papers are often placed 
online by their eager authors and subjected to peer scrutiny. But this 
haphazard publishing cottage industry did nothing to dethrone the print 
incumbents and their avaricious pricing. 
 
The major missing element is, of course, respectability. But there are 
others. No agreed upon content or knowledge classification method has 
emerged. Some web sites (such as Suite101) use the Dewey decimal system. 
Others invented and implemented systems of their own making. Additionally, 
one-click publishing technology (such as Webseed's or Blogger's) came to be 
identified strictly with non-scholarly material: personal reminiscences, 
correspondence, articles and news.
 
Enter Scindex and its Academic Resource Channel. Established by academics 
and software experts from Bulgaria, it epitomizes the tearing down of 
geographical barriers heralded by the Internet. But it does much more than 
that. Scindex is a whole, self-contained, stand-alone, instant self-
publishing and self-assembly system. Self-publishing systems do exist (for 
instance, Purdue University's) - but they incorporate only certain 
components. Scindex covers the whole range.
 
Having (freely) registered as a member, a scientist or a scholar 
can publish their papers, essays, research results, articles and comments 
online. They have to submit an abstract and use Scindex's classification 
("call") numbers and science descriptors, arranged in a massive directory 
available in the "RealSci Locator". The Locator can also be downloaded and 
used off-line, and it is surprisingly user-friendly. The submission process 
itself is totally automated and very short.
 
The system includes a long series of thematic journals. These journals 
self-assemble, in accordance with the call numbers selected by the 
submitters. An article submitted with certain call numbers will 
automatically be included in the relevant journals. 
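 
The self-assembly mechanism is, in essence, an index from call numbers to 
journals. A minimal sketch in Python - the call numbers and journal titles 
are invented, as Scindex's real scheme is only described in outline above:
 
  # Each journal is defined by a call number it covers; an article tagged
  # with that call number is automatically included in the matching journal.
  journals = {
      "QC-20": "Journal of Mathematical Physics (self-assembled)",
      "QA-76": "Journal of Computing (self-assembled)",
  }
  issues = {title: [] for title in journals.values()}

  def submit(article, call_numbers):
      # Self-assembly: no editor routes the article; its call numbers do.
      for cn in call_numbers:
          if cn in journals:
              issues[journals[cn]].append(article)

  submit("On Lattice Models", ["QC-20"])
  submit("Numerical Methods for Lattice Models", ["QC-20", "QA-76"])
  for journal, articles in issues.items():
      print(journal, "->", articles)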
 
The fly in the ointment is the absence of peer review. As the system moves 
from beta to commercialization, Scindex intends to address this issue by 
introducing a system of incentives and inducements. Reviewers will be 
granted "credit points" to be applied against the (paid) publication of 
their own papers, for instance. 
 
Scindex is the model of things to come. Publishing becomes more and more 
automated and knowledge-orientated. Peer reviewed papers become 
more outlandishly expensive and irrelevant. Scientists and scholars are 
getting impatient and rebellious. The confluence of these three trends 
spells - at the least - the creation of a web based universe of 
parallel and alternative scholarly publishing. 
 
 
 
The Kidnapping of Content
By: Sam Vaknin
 
http://www.plagiarism.org and http://www.Turnitin.com
 
Latin kidnapped the word "plagion" from ancient Greek and it ended up in 
English as "plagiarism". It literally means "to kidnap" - most commonly, to 
misappropriate content and wrongly attribute it to oneself. It is a close 
kin of piracy. But while the software or content pirate does not bother to 
hide or alter the identity of the content's creator or the software's 
author - the plagiarist does. Plagiarism is, therefore, more pernicious 
than piracy.
 
Enter Turnitin.com. An off-shoot of www.iparadigms.com, it was established 
by a group of concerned (and commercially minded) scientists from UC 
Berkeley. 
 
Whereas digital rights and asset management systems are geared to prevent 
piracy - plagiarism.org and its commercial arm, Turnitin.com, are the cyber 
equivalent of a law enforcement agency, acting after the fact to discover 
the culprits and uncover their misdeeds. This, they claim, is a first stage 
on the way to a plagiarism-free Internet-based academic community of both 
teachers and students, in which the educational potential of the Internet 
can be fully realized.
 
The problem is especially severe in academia. Various surveys have 
discovered that a staggering 80%(!) of US students cheat and that at least 
30% plagiarize written material. The Internet only exacerbated this 
problem. More than 200 cheat-sites have sprung up, with thousands of papers 
available on-line and tens of thousands of satisfied plagiarists the world 
over. Some of these hubs - like cheater.com, cheatweb or cheathouse.com - 
make no bones about their offerings. Many of them are located outside the 
USA (in Germany, or Asia) and at least one offers papers in a few 
languages, Hebrew included.
 
The problem, though, is not limited to the ivory towers. E-zines 
plagiarize. The print media plagiarize. Individual journalists plagiarize, 
many with abandon. Even advertising agencies and financial institutions 
plagiarize. The amount of material out there is so overwhelming that the 
plagiarist develops a (fairly justified) sense of immunity. The 
temptation is irresistible, the rewards big and the pressures 
of modern life great.
 
Some of the plagiarists are straightforward copiers. Others substitute 
words, add sentences, or combine two or more sources. This raises the 
question: "when should content be considered original and when - 
plagiarized?". Should the test for plagiarism be more stringent than the 
one applied by the Copyright Office? And what rights are implicitly granted 
by the material's genuine authors or publishers once they place the content 
on the Internet? Is the Web a public domain and, if yes, to what extent? 
These questions are not easily answered. Consider reports generated by 
users from a database. Are these reports copyrighted - and if so, by whom - 
by the database compiler or by the user who defined the parameters, without 
which the reports in question would have never been generated? What about 
"fair use" of text and works of art? In the USA, the backlash against 
digital content piracy and plagiarism has reached preposterous legal, 
litigious and technological nadirs. 
 
Plagiarism.org has developed a statistics-based technology (the "Document 
Source Analysis") which creates a "digital fingerprint" of every document 
in its database. Web crawlers are then unleashed to scour the Internet and 
find documents with the same fingerprint and a colour-coded report is 
generated. An instructor, teacher, or professor can then use the report to 
prove plagiarism and cheating. 
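 
The "Document Source Analysis" itself is proprietary, but the generic idea 
behind document fingerprinting can be sketched: hash overlapping word 
n-grams ("shingles") and measure the overlap between fingerprint sets. A 
minimal Python illustration of that generic technique, not plagiarism.org's 
actual algorithm:
 
  import hashlib

  def fingerprint(text, n=5):
      """Hash every overlapping n-word shingle of the text."""
      words = text.lower().split()
      shingles = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
      return {hashlib.md5(s.encode()).hexdigest() for s in shingles}

  def similarity(a, b):
      """Jaccard overlap of fingerprints: 1.0 = identical, 0.0 = disjoint."""
      fa, fb = fingerprint(a), fingerprint(b)
      return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

  original = "plagiarism is the kidnapping of content and its attribution to oneself"
  suspect = "experts say plagiarism is the kidnapping of content and its attribution to oneself"
  print(f"overlap: {similarity(original, suspect):.2f}")
 
A high overlap score flags a document pair for the kind of colour-coded 
report described above; word substitutions lower the score, which is why 
real systems use more robust variants of this idea.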
 
Piracy is often considered to be a form of viral marketing (even by 
software developers and publishers). The author's, publisher's, or software 
house's data are preserved intact in the cracked copy. Pirated copies of e-
books often contribute to increased sales of the print versions. Crippled 
versions of software or pirated copies of software without its manuals, 
updates and support - often lead to the purchase of a licence. Not so with 
plagiarism. The identities of the author, editor, publisher and illustrator 
are deleted and replaced by the details of the plagiarist. And while piracy 
is discussed freely and fought vigorously - the discussion of plagiarism is 
still taboo and actively suppressed by image-conscious and endowment-weary 
academic institutions and media. It is an uphill struggle but 
plagiarism.org has taken the first resolute step.
 
 
 
The Miraculous Conversion
By: Sam Vaknin
 
http://www.ideavirus.com
 
 
The recent bloodbath among online content peddlers and digital media 
proselytisers can be traced to two deadly sins. The first was to assume 
that traffic equals sales. In other words, that a miraculous conversion 
would spontaneously occur among the hordes of visitors to a web site. It 
was taken as an article of faith that a certain percentage of this mass 
would inevitably and nigh hypnotically reach for their bulging pocketbooks 
and purchase content, however packaged. Moreover, ad revenues (more reasonably) 
were assumed to be closely correlated with "eyeballs". This myth led to an 
obsession with counters, page hits, impressions, unique visitors, 
statistics and demographics. 
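 
The flaw is quantifiable. Revenue depends on an assumed conversion rate, 
and with realistic rates the hordes of visitors translate into very little 
money. A back-of-the-envelope sketch in Python (all figures are invented 
for illustration):
 
  monthly_visitors = 1_000_000
  price_per_item = 5.00

  for conversion_rate in (0.05, 0.01, 0.001):  # 5%, 1%, 0.1% of visitors buy
      revenue = monthly_visitors * conversion_rate * price_per_item
      print(f"{conversion_rate:.1%} conversion -> ${revenue:,.0f}/month")
 
The same million "eyeballs" support $250,000 or $5,000 a month, depending 
entirely on the conversion assumption - the number the counters and page 
hits never supplied.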
 
It failed, however, to take into account the dwindling efficacy of what 
Seth Godin, in his brilliant essay ("Unleashing the IdeaVirus"), calls 
"Interruption Marketing" - ads, banners, spam and fliers. It also ignored, 
at its peril, the ethos of free content and open source prevalent among the 
Internet opinion leaders, movers and shapers. These two neglected aspects 
of Internet hype and culture led to the trouncing of erstwhile promising 
web media companies while their business models were exposed as wishful 
thinking. 
 
The second mistake was to cater exclusively to the needs of a highly 
idiosyncratic group of people (Silicon Valley geeks and nerds). The 
assumption that the USA (let alone the rest of the world) is Silicon 
Valley writ large proved to be calamitous to the industry. 
 
In the 1970s and 1980s, evolutionary biologists like Richard Dawkins and 
Rupert Sheldrake developed models of cultural evolution. Dawkins' "meme" is 
a cultural element (like a behaviour or an idea) passed from one individual 
to another and from one generation to another not through biological, 
genetic means - but by imitation. Sheldrake added the notion of contagion - 
"morphic resonance" - which causes behaviour patterns to suddenly emerge 
in whole populations. Physicists talked about sudden "phase transitions", 
the emergent results of a critical mass reached. A latter-day thinker, 
Malcolm Gladwell, called it the "tipping point".
 
Seth Godin invented the concept of an "ideavirus" and an attendant 
marketing terminology. In a nutshell, he says, to use his own summation: 
 
"Marketing by interrupting people isn't cost-effective anymore. You can't 
afford to seek out people and send them unwanted marketing, in large groups 
and hope that some will send you money. Instead the future belongs to 
marketers who establish a foundation and process where interested people 
can market to each other. Ignite consumer networks and then get out of the 
way and let them talk."
 
This is sound advice with a shaky conclusion. The conversion from exposure 
to a marketing message (even from peers within a consumer network) - to an 
actual sale is a convoluted, multi-layered, highly complex process. It is 
not a "black box" better left unattended. It is the same deadly sin all 
over again - the belief in a miraculous conversion. And it is highly US-
centric. People in other parts of the world interact entirely differently.
 
You can get them to visit, you can get them to talk, and you can get them 
to excite others. But to get them to buy is a whole different ballgame. 
Dot.coms had better begin to study its rules.
 
 
 
The Medium and the Message  
By: Sam Vaknin 
 
 
A debate is raging in e-publishing circles: should content be encrypted and 
protected (the Barnes and Noble or DigitalGoods model) - or should it be 
distributed freely and thus serve as a form of viral marketing (Seth 
Godin's "ideavirus")? Publishers fear that freely distributed and cost-free 
"cracked" e-books will cannibalize print books to oblivion. 
 
The more paranoid point at the music industry. It failed to co-opt the 
emerging peer-to-peer platforms (Napster) and to offer a viable digital 
assets management system with an equitable sharing of royalties. The 
results? A protracted legal battle and piracy run amok. "Publishers" - goes 
this creed - "are positioned to incorporate encryption and protection 
measures at the very inception of the digital publishing industry. They 
ought to learn the lesson." 
 
But this view ignores a vital difference between sound and text. In music, 
what matters is the song or the musical piece. The medium (or carrier, or 
packaging) is marginal and interchangeable. A CD, an audio cassette, or an 
MP3 player are all fine, as far as the consumer is concerned. The listener 
bases his or her purchasing decisions on sound quality and the faithfulness 
of reproduction of the listening experience (for instance, in a concert 
hall). This is a very narrow, rational, measurable and quantifiable 
criterion. 
 
Not so with text. 
 
Content is only one of many elements, all on an equal footing, underlying 
the decision to purchase a specific text-"carrier" (medium). Various media 
encapsulating IDENTICAL text will still fare differently. Hence the failure 
of CD-ROMs and e-learning. People tend to consume content in other formats 
or media, even if it is fully available to them or even owned by them in 
one specific medium. People prefer to pay to listen to live lectures rather 
than read freely available online transcripts. Libraries buy print journals 
even when they have subscribed to the full text online versions of the very 
same publications. And consumers overwhelmingly prefer to purchase books in 
print rather than their e-versions. 
 
This is partly a question of the slow demise of old habits. E-books have 
yet to develop the user-friendliness, platform-independence, portability, 
browsability and many other attributes of this ingenious medium, the 
Gutenberg tome. But it also has to do with marketing psychology.  Where 
text (or text equivalents, such as speech) is concerned, the medium is at 
least as important as the message. And this will hold true even when e-
books catch up with their print brethren technologically. 
 
There is no doubt that e-books will eventually surpass print books as a 
medium and offer numerous options:  hyperlinks within the e-book and 
without it - to web content, reference works, etc., embedded instant 
shopping and ordering links, divergent, user-interactive, decision driven 
plotlines, interaction with other e-books (using Bluetooth or another 
wireless standard), collaborative authoring, gaming and community 
activities, automatically or periodically updated content, multimedia 
capabilities, database, Favourites and History Maintenance (records of 
reading habits, shopping habits, interaction with other readers, plot 
related decisions and much more), automatic and embedded audio conversion 
and translation capabilities, full wireless piconetworking and 
scatternetworking capabilities and more. 
 
The same textual content will be available in the future in various media. 
Ostensibly, consumers should gravitate to the feature-rich and much cheaper 
e-book. But they won't - because the medium is as important as the text 
message. It is not enough to own the same content, or to gain access to the 
same message. Ownership of the right medium does count. Print books offer 
connectivity within an historical context (tradition). E-books are cold and 
impersonal, alienated and detached. The printed word offers permanence. 
Digital text is ephemeral (as anyone whose writings perished in the 
recent dot.com bloodbath, or in the Deja takeover by Google, can attest). 
Printed volumes are a whole sensorium, a sensual experience - olfactory 
and tactile and visual. E-books are one-dimensional in comparison. These 
are differences 
that cannot be overcome, not even with the advent of digital "ink" on 
digital "paper". They will keep the print book alive and publishers' 
revenues flowing. 
 
People buy printed matter not merely because of its content. If this were 
true, e-books would have won the day. Print books are a packaged experience, 
the substance of life. People buy the medium as often and as much as they 
buy the message it encapsulates. It is impossible to compete with this 
mystique. Safe in this knowledge, publishers should let go and impose on e-
books "encryption" and "protection" levels as rigorous as they do on 
their print books. The latter are here to stay alongside the former. With 
the proper pricing and a modicum of trust, e-books may even end up 
promoting the old and trusted print versions.
  
 
The Idea of Reference
By: Sam Vaknin
 
http://www.britannica.com
 
There is no source of reference remotely as authoritative as the 
Encyclopaedia Britannica. There is no brand as venerable and as veteran as 
this mammoth labour of knowledge and ideas established in 1768. There is no 
better value for money. And, after a few sputters and bugs, it now comes in 
all shapes and sizes, including two CD-ROM versions (standard and deluxe) 
and an appealing and reader-friendly web site. So, why does it always 
appear to be on the brink of extinction?
 
The Britannica provides for an interesting study of the changing fortunes 
(and formats) of vendors of reference. As late as a decade ago, it was 
still sold as an imitation-leather-bound set of 32 volumes. As print 
encyclopaedias went, it was a daring innovator and a pioneer of 
hyperlinked-like textual design. It sported a subject index, a lexical part 
and an alphabetically arranged series of in-depth essays authored by the 
best in every field of human erudition. 
 
When the CD-ROM erupted on the scene, the Britannica mismanaged the 
transition. As late as 1997, it was still selling a sordid text-only 
compact disc which included a part of the encyclopaedia. Only in 1998 did 
the Britannica switch to multimedia and add tables and graphs to the CD. 
Video and sound were to make their appearance even later. This error in 
trend analysis left the field wide open to the likes of Encarta and 
Grolier. The Britannica failed to grasp the irreversible shift from 
cumbersome print volumes to slender and freely searchable CD-ROMs. 
Reference was going digital and the Britannica's sales plummeted.
 
The Britannica was also late to cash in on the web revolution - but, when it 
did, it became a world leader overnight. Its unbeatable brand was a 
decisive factor. A failed experiment with an annoying subscription model 
gave way to unrestricted access to the full contents of the Encyclopaedia 
and much more besides: specially commissioned articles, fora, an annotated 
internet guide, news in context, downloads and shopping. The site enjoys 
healthy traffic and the Britannica's CD-ROM interacts synergistically with 
its contents (through hyperlinks).
 
Yet, recently, the Britannica had to fire hundreds of workers (in its web 
division) and a return to a pay-for-content model is contemplated. What 
went wrong again? Internet advertising did. The Britannica's revenue model 
was based on monetizing eyeballs, to use a faddish refrain. When the 
perpetuum mobile of "advertisers pay for content and users get it free" 
crumbled - the Britannica found itself in familiar dire straits.
 
Is there a lesson to be learned from this arduous and convoluted tale? Are 
works of reference not self-supporting regardless of the revenue model 
(subscription, ad-based, print, CD-ROM)? This might well be the case. 
 
Classic works of reference - from Diderot to the Encarta - offered a series 
of advantages to their users:
 
1. Authority - Works of reference are authored by experts in their fields 
and peer-reviewed. This ensures both objectivity and accuracy.
 
2. Accessibility - Huge amounts of material were assembled under one 
"roof". This abolished the need to scour numerous sources of variable 
quality to obtain the data one needed.
 
3. Organization - This pile of knowledge was organized in a convenient and 
recognizable manner (alphabetically or by subject).
 
Moreover, authoring an encyclopaedia was such a daunting and expensive task 
that only states, academic institutions, or well-funded businesses were 
able to produce them. At any given period there was a dearth of reliable 
encyclopaedias, which exercised a monopoly on the dissemination of 
knowledge. Competitors were few and far between. The price of these tomes 
was, therefore, always exorbitant but people paid it to secure education 
for their children and a fount of knowledge at home. Hence the long gone 
phenomenon of "door to door encyclopaedia salesmen" and instalment plans.
 
Yet, all these advantages were eroded to fine dust by the Internet. The web 
offers a plethora of highly authoritative information authored and released 
by the leading names in every field of human knowledge and endeavour. The 
Internet is, in effect, an encyclopaedia - far more detailed, far more 
authoritative, and far more comprehensive than any encyclopaedia can ever 
hope to be. The web is also fully accessible and fully searchable. What it 
lacks in organization it compensates in breadth and depth and recently 
emergent subject portals (directories such as Yahoo! or The Open Directory) 
have become the indices of the Internet. The aforementioned anti-
competition barriers to entry are gone: web publishing is cheap and 
immediate. Technologies such as web communities, chat, and e-mail enable  
massive collaborative efforts. And, most important, the bulk of the 
Internet is free. Users pay only the communication costs.
 
The long-heralded transition from free content to fee-based information may 
revive the fortunes of online reference vendors. But as long as the 
Internet - with its 2,000,000,000 (!) visible pages (and 5 times as many 
pages in its databases) - is free, encyclopaedias have little by way of a 
competitive advantage.



 
 
 
Will Content Ever be Profitable?
By: Sam Vaknin
 
THE CURRENT WORRIES 
1. Content Suppliers 
The Ethos of Free Content 
Content suppliers are the underprivileged sector of the Internet. They all 
lose money (even sites which offer basic, standardized goods - books, CDs), 
with the exception of sites proffering sex or tourism. No user seems to be 
grateful for the effort and resources invested in creating and distributing 
content. The recent breakdown of traditional roles (between publisher and 
author, record company and singer, etc.) and the direct access the creative 
artist is gaining to his paying public may change this attitude of 
ingratitude - but hitherto there are scarce signs of that. Moreover, it is 
either quality of presentation (which only a publisher can afford) or 
ownership and (often shoddy) dissemination of content by the author. A 
really qualitative, fully commerce-enabled site costs up to 5,000,000 USD, 
excluding site maintenance and customer and visitor services. Despite these 
heavy outlays, site designers are constantly criticized for lack of 
creativity or for too much creativity. More and more is asked of content 
purveyors and creators. They are exploited by intermediaries, hitchhikers 
and other parasites. This is all an off-shoot of the ethos of the Internet 
as a free content area. 
Most users like to surf (browse, visit sites) the net without any reason 
or goal in mind. This makes it difficult to apply traditional marketing 
techniques to the web. 
What is the meaning of "targeted audiences" or "market shares" in this 
context? If a surfer visits sites which deal with aberrant sex and nuclear 
physics in the same session - what is one to make of it? 
Moreover, the public and legislative backlash against the gathering of 
surfers' data by Internet ad agencies and other web sites has led to 
growing ignorance regarding the profile of Internet users, their 
demography, habits, preferences and dislikes. 
"Free" is a key word on the Internet: it used to belong to the US 
Government and to a bunch of universities. Users like information, with 
emphasis on news and data about new products. But they do not like to shop 
on the net - yet. Only 38% of all surfers made a purchase during 1998. 
It would seem that users will not pay for content unless it is unavailable 
elsewhere, or qualitatively rare, or made rare. One way to "rarefy" content 
is to review and rate it. 
2. Quality-rated Content 
There is a long term trend of clutter-breaking website-rating and critique. 
It may have a limited influence on the consumption decisions of some users 
and on their willingness to pay for content. Browsers already sport "What's 
New" and "What's Hot" buttons. Most Search Engines and directories 
recommend specific sites. But users are still cautious. Studies discovered 
that no user, no matter how heavy, has consistently re-visited more than 
200 sites, a minuscule number. Some recommendation services often produce 
random - at times, wrong - selections for their users. There are also 
concerns regarding privacy issues. The backlash against Amazon's "readers 
circles" is an example. Web critics, who work today mainly for the printed 
press, publish their wares on the net and collaborate with intelligent 
software which hyperlinks to web sites, recommends them and refers users to 
them. Some web critics (guides) became identified with specific 
applications - really, expert systems - which incorporate their knowledge 
and experience. Most volunteer-based directories (such as the "Open 
Directory" and the late "Go" directory) work this way. 
The flip side of the coin of content consumption is investment in content 
creation, marketing, distribution and maintenance. 
3. The Money 
Where is the capital needed to finance content likely to come from? 
Again, there are two schools: 
According to the first, sites will be financed through advertising - and 
so will search engines and other applications accessed by users. 
Certain ASPs (Application Service Providers, which rent out access to 
application software residing on their servers) are considering this 
model. 
The recent collapse in online advertising rates and click-through rates 
raised serious doubts regarding the validity and viability of this model. 
Marketing gurus, such as Seth Godin, went as far as declaring "interruption 
marketing" (i.e., ads and banners) dead. 
The second approach is simpler and allows for the existence of non-
commercial content. 
It proposes to collect negligible sums (cents or fractions of cents) from 
every user for every visit ("micro-payments"). These accumulated cents will 
enable site owners to update and maintain their sites and will encourage 
entrepreneurs to develop new content and invest in it. Certain content 
aggregators (especially of digital textbooks) have adopted this model 
(Questia, Fathom). 
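The arithmetic of this model is worth making concrete. A minimal sketch in 
Python (the per-visit fee and traffic figures are invented for 
illustration, not drawn from any actual aggregator): 
 
    # Micro-payments: fractions of a cent per visit add up to
    # meaningful revenue only at very high traffic volumes.
    def monthly_revenue(fee_per_visit_usd, visits_per_day):
        return fee_per_visit_usd * visits_per_day * 30

    for visits in (1000, 100000, 10000000):
        print(visits, "visits/day ->",
              monthly_revenue(0.002, visits), "USD/month")
    # 1,000 visits/day yield a mere 60 USD/month;
    # 10,000,000 visits/day yield 600,000 USD/month.
 
Which is why micro-payments favour high-traffic aggregators over 
individual sites. 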
The adherents of the first school point to the 5 million USD invested in 
advertising during 1995 and to the 60 million or so invested during 1996. 
Its opponents point at exactly the same numbers: ridiculously small when 
contrasted with more conventional advertising modes. The potential of 
advertising on the net is limited to 1.5 billion USD annually in 1998, 
thundered the pessimists. The actual figure was double the prediction but 
still woefully small and inadequate to support the Internet's content 
development. Compare these figures to the sale of Internet software (4 
billion), Internet hardware (3 billion), and Internet access provision 
(4.2 billion in 1995 alone!). 
Even if online advertising were to be restored to its erstwhile glory days, 
other bottlenecks remain. Advertising encourages the consumer to interact 
and to initiate the delivery of a product to him. This - the delivery phase 
- is a slow and enervating epilogue to the exciting affair of ordering 
online. Too many consumers still complain of late delivery of the wrong or 
defective products. 
The solution may lie in the integration of advertising and content. The 
late Pointcast, for instance, integrated advertising into its news 
broadcasts, continuously streamed to the user's screen, even when inactive 
(it had an active screen saver and ticker, in a "push technology"). 
Downloading of digital music, video, and text (e-books) leads to the 
immediate gratification of consumers and increases the efficacy of 
advertising. 
Whatever the case may be, a uniform, agreed-upon system of rating as a 
basis for charging advertisers is sorely needed. There is also the 
question of what the advertiser pays for. The rates of many 
advertisers (Procter and Gamble, for instance) are based not on the number 
of hits or impressions (entries, visits to a site) - but on the number of 
times that their advertisement was hit (page views) or clicked 
through. 
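The gap between the two pricing bases is easily quantified. A toy 
comparison (all rates below are assumptions chosen for illustration): 
 
    # Impression-based (CPM) versus click-through-based (CPC)
    # billing for the same ad campaign.
    impressions = 1000000
    click_through_rate = 0.005      # 0.5% of impressions get clicked
    cpm_rate = 10.0                 # assumed USD per 1,000 impressions
    cpc_rate = 1.5                  # assumed USD per click

    cost_by_impression = impressions / 1000 * cpm_rate
    cost_by_click = impressions * click_through_rate * cpc_rate
    print(cost_by_impression, cost_by_click)   # 10000.0 vs 7500.0
 
Under click-through billing, the advertiser pays only for demonstrated 
interest - hence the advertisers' preference for it. 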
Finally, there is the paid subscription model - a flop, to judge by the 
experience of the meagre number of venerable and leading newspapers 
whose sites are on a subscription basis: Dow Jones (the Wall Street 
Journal) and The Economist. Only two. 
All this is not very promising. But one should never forget that the 
Internet is probably the closest thing we have to an efficient market. As 
consumers refuse to pay for content, investment will dry up and content 
will become scarce (through closures of web sites). As scarcity sets in, 
consumers may reconsider. 
Your article deals with the future of the Internet as a medium. Will it be 
able to support its content creation and distribution operations 
economically? 
If the Internet is a budding medium - then we should derive great benefit 
from a study of the history of its predecessors. 
The Future History of the Internet as a Medium 
The Internet is simply the latest in a series of networks which 
revolutionized our lives. A century before the Internet, the telegraph, 
the railways, the radio, and the telephone were similarly heralded as 
"global" and transforming. Every medium of communications goes through the 
same evolutionary cycle: 
Anarchy 
The Public Phase 
At this stage, the medium and the resources attached to it are very cheap 
and accessible, and under no regulatory constraints. The public sector 
steps in: higher education institutions, religious institutions, 
government, not for profit organizations, non-governmental organizations 
(NGOs), trade unions, etc. Bedeviled by limited financial resources, they 
regard the new medium as a cost-effective way of disseminating their 
messages. 
The Internet was not exempt from this phase, which ended only a few years 
ago. It started with complete computer anarchy manifested in ad hoc 
networks, local networks, and networks of organizations (mainly 
universities and organs of the government, such as DARPA, a part of the 
defence establishment, in the USA). Non-commercial entities jumped on the 
bandwagon and started sewing these networks together (an activity fully 
subsidized by government funds). The result was a globe-encompassing 
network of academic institutions. The American Pentagon established the 
network of all networks, the ARPANET. Other government departments joined 
the fray, headed by the National Science Foundation (NSF), which withdrew 
from the Internet only lately. 
The Internet (with a different name) became semi-public property - with 
access granted to the chosen few. 
Radio took precisely this course. Radio transmissions started in the USA 
in 1920. Those were anarchic broadcasts with no discernible regularity. 
Non-commercial organizations and not for profit organizations began their 
own broadcasts and even created radio broadcasting infrastructure (albeit 
of the cheap and local kind) dedicated to their audiences. Trade unions, 
certain educational institutions, and religious groups commenced "public 
radio" broadcasts. 
The Commercial Phase 
When the users (e.g., listeners in the case of the radio, or owners of PCs 
and modems in the case of the Internet) reach a critical mass - the 
business sector is alerted. In the name of capitalist ideology (another 
religion, really) it demands the "privatization" of the medium. This harps 
on very sensitive strings in every Western soul: the efficient allocation 
of resources which is the result of competition. Corruption and 
inefficiency are intuitively associated with the public sector ("Other 
People's Money" - OPM). This, together with the ulterior motives of members 
of the ruling political echelons (the infamous American Paranoia), a lack 
of variety and of catering to the tastes and interests of certain 
audiences, and the automatic equation of private enterprise with 
democracy, leads to a privatization of the young medium. 
The end result is the same: the private sector takes over the medium from 
"below" (makes offers to the owners or operators of the medium that they 
cannot possibly refuse) - or from "above" (successful lobbying in the 
corridors of power leads to the appropriate legislation, and the medium is 
"privatized"). Every privatization - especially that of a medium - provokes 
public opposition. There are (usually founded) suspicions that the 
interests of the public are compromised and sacrificed on the altar of 
commercialization and rating. Fears of monopolization and cartelization of 
the medium are evoked - and proven correct in due course. Otherwise, there 
is fear of the concentration of control of the medium in a few hands. All 
these things do happen - but the pace is so slow that the initial fears 
are forgotten and public attention reverts to fresher issues. 
A new Communications Act was enacted in the USA in 1934. It was meant to 
transform radio frequencies into a national resource to be sold to the 
private sector, which was supposed to use it to transmit radio signals to 
receivers. In other words: the radio was passed on to private and 
commercial hands. Public radio was doomed to be marginalized. 
The American administration withdrew from its last major involvement in 
the Internet in April 1995, when the NSF ceased to finance some of the 
networks and, thus, privatized its hitherto heavy involvement in the net. 
A new Communications Act was legislated in 1996. It permitted "organized 
anarchy". It allowed media operators to invade each other's territories. 
Phone companies were allowed to transmit video, and cable companies were 
allowed to transmit telephony, for instance. This was all phased over a 
long period of time - still, it was a revolution whose magnitude is 
difficult to gauge and whose consequences defy imagination. It carries an 
equally momentous price tag - official censorship. "Voluntary censorship", 
to be sure, with somewhat toothless standardization and enforcement 
authorities, to be sure - still, a censorship with its own institutions to 
boot. The private sector reacted by threatening litigation - but, beneath 
the surface, it is caving in to pressure and temptation, constructing its 
own censorship codes both in the cable and in the Internet media. 
Institutionalization 
This phase is the next in the Internet's history, though, it seems, few 
realize it. 
It is characterized by enhanced activities of legislation. Legislators, on 
all levels, discover the medium and lurch at it passionately. Resources 
which were considered "free" are suddenly transformed into "national 
treasures not to be dispensed with cheaply, casually, and with frivolity". 
It is conceivable that certain parts of the Internet will be "nationalized" 
(for instance, in the form of a licensing requirement) and tendered to the 
private sector. Legislation will be enacted which will deal with permitted 
and disallowed content (obscenity? incitement? racial or gender bias?). No 
medium in the USA (not to mention the wide world) has eschewed such 
legislation. There are sure to be demands to allocate time (or space, or 
software, or content, or hardware) to "minorities", to "public affairs", 
to "community business". This is a tax that the business sector will have 
to pay to fend off the eager legislator and his nuisance value. 
All this is bound to lead to a monopolization of hosts and servers. The 
important broadcast channels will diminish in number and be subjected to 
severe content restrictions. Sites which refuse to succumb to these 
requirements will be deleted or neutralized. Content guidelines (a 
euphemism for censorship) exist, even as we write, in all major content 
providers (CompuServe, AOL, Yahoo!-Geocities, Tripod, Prodigy). 



The Bloodbath 
This is the phase of consolidation. The number of players is severely 
reduced. The number of browser types will settle on 2-3 (Netscape, 
Microsoft, and Opera?). Networks will merge to form privately owned mega-
networks. Servers will merge to form hyper-servers run on supercomputers 
in "server farms". The number of ISPs will be considerably cut. 50 
companies ruled the greater part of the media markets in the USA in 1983. 
The number in 1995 was 18. At the end of the century they will number 6. 
This is the stage when companies - fighting for financial survival - strive 
to acquire as many users/listeners/viewers as possible. The programming is 
shallowed to the lowest (and widest) common denominator. Shallow 
programming dominates as long as the bloodbath proceeds. 
From Rags to Riches 
Tough competition produces four processes: 
     1. A Major Drop in Hardware Prices 
This happens in every medium but it doubly applies to a computer-dependent 
medium, such as the Internet.  
Computer technology seems to abide by "Moore's Law", which says that the 
number of transistors which can be put on a chip doubles every 18 months. 
As a result of this miniaturization, computing power quadruples every 18 
months and an exponential series ensues. Organic-biological DNA computers, 
quantum computers, and chaos computers - prompted by vast profits and 
spawned by inventive genius - will ensure the continued applicability of 
Moore's Law. 
The Internet is also subject to "Metcalfe's Law". 
It says that when we connect N computers to a network - we get an increase 
of N to the second power in its computing and processing power. And these 
N computers are more powerful every year, according to Moore's Law. The 
growth of computing power in networks is a multiple of the effects of the 
two laws. More and more computers with ever-increasing computing power get 
connected and create an exponential, 16-times growth in the network's 
computing power every 18 months. 
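The 16-fold figure can be reproduced in a few lines, taking the essay's 
reading of both laws at face value and assuming, additionally, that the 
number of connected machines doubles every 18 months: 
 
    # Combined growth sketch: per-machine power quadruples per
    # 18-month period (the essay's reading of Moore's Law) and
    # network power scales as N**2 (its reading of Metcalfe's Law).
    def network_power(periods, machines=1000, power_per_machine=1.0):
        n = machines * 2 ** periods           # machines double
        p = power_per_machine * 4 ** periods  # per-machine power x4
        return n ** 2 * p

    print(network_power(1) / network_power(0))   # 16.0
 
The squared term contributes a factor of 4 and the per-machine gain 
another factor of 4 - hence 16 per period. 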
     2. Content-related Fees 
Free content was prevalent on the Net until recently. Even potentially 
commercial software can still be downloaded for free. In many countries, 
television viewers still pay for television broadcasts - but in the USA 
and many other countries in the West, the basic package of television 
channels comes free of charge. 
As users / consumers form a habit of using (or consuming) the software - it 
is commercialized and begins to carry a price tag. This is what happened 
with the advent of cable television: content is sold for subscription or 
per-usage (Pay Per View - PPV) fees. 
Gradually, this is what will happen to most of the sites and software on 
the Net. Those which survive will begin to collect usage fees, access fees, 
subscription fees, downloading fees and other, appropriately named, fees. 
These fees are bound to be low - but it is the principle that counts. Even 
a few cents per transaction may accumulate to hefty sums with the traffic 
which characterizes some web sites on the Net (or, at least its more 
popular locales). 



     3. Increased User Friendliness 
As long as the computer is less user-friendly and less reliable 
(predictable) than television - less of a black box - its potential (and 
its future) is limited. Television attracts 3.5 billion users daily. The 
Internet stands to attract - under the most exuberant scenario - less than 
one tenth of this number of people. The only reasons for this disparity 
are (the lack of) user friendliness and reliability. Even browsers, among 
the most user-friendly applications ever, are not sufficiently so. The 
user still needs to know how to use a keyboard and must possess some basic 
acquaintance with the operating system. The more mature the medium, the 
more friendly it becomes. Finally, it will be operated using speech or 
common language. There will be room left for user "hunches" and built-in 
flexible responses. 
     4. Social Taxes 
Sooner or later, the business sector has to mollify the god of public 
opinion with offerings of a political and social nature. The Internet is 
an affluent, educated, yuppie medium. It requires literacy and numeracy, a 
live interest in information and its various uses (scientific, commercial, 
other), and a lot of resources (free time, money to invest in hardware, 
software, and connect time). It empowers - and thus deepens the divide 
between the haves and have-nots, the developed and the developing world, 
the knowing and the ignorant, the computer literate and the computer 
illiterate. 
In short: the Internet is an elitist medium. Publicly, this is an 
unhealthy posture. "Internetophobia" is already discernible. People (and 
politicians) talk about how unsafe the Internet is and about its possible 
uses for racial, sexist, and pornographic purposes. The wider public is in 
a state of awe. 
So, site builders and owners will do well to begin to improve their image: 
provide free access to schools and community centres, bankroll Internet 
literacy classes, freely distribute content and software to educational 
institutions, and collaborate with researchers, social scientists, and 
engineers. In short: encourage the view that the Internet is a medium 
catering to the needs of the community and the underprivileged, a mostly 
altruist endeavour. This also happens to make good business sense, by 
educating and conditioning a future generation of users. He who visited a 
site as a student, free of charge, will pay to do so when made an 
executive. Such a user will also pass on the information within and 
without his organization. This is called media exposure. The future will, 
no doubt, be witness to public Internet terminals, subsidized ISP 
accounts, free Internet classes, and an alternative "non-commercial, 
public" approach to the Net. This may prove to be one more source of 
revenue for content creators and distributors. 
  
Jamaican Overdrive - LDC's and LCD's
By: Sam Vaknin
 
OverDrive - an e-commerce, software conversion and e-publishing 
applications leader - has just expanded an e-book technology centre by 
adding 200 e-book editors. This happened in Montego Bay, Jamaica - one of 
the less privileged spots on earth. The centre now provides a vertical e-
publishing service - from manuscript editing to conversion to Quark (for 
POD), Adobe, and MS Reader ebook formats. Thus, it is not confined to the 
classic sweatshop cum production centre so common in Less Developed 
Countries (LDC's). It is a full fledged operation with access to cutting 
edge technology. 
The Jamaican OverDrive is the harbinger of things to come and the outcome 
of a confluence of a few trends. 
First, there is the insatiable appetite big publishers (such as McGraw-
Hill, Random House, and Harper Collins) have developed for converting 
their hitherto inertial backlists into e-books. Gone are the days when 
e-books were perceived as merely a novel form of packaging. Publishers 
have understood the cash potential this new distribution channel offers 
and the value added to stale print tomes in the conversion process. This 
epiphany is especially manifest in education and textbook publishing. 
Then there is the maturation of industry standards, readers and audiences. 
Both the supply side (title lists) and the demand side (readership) have 
increased. Giants like Microsoft have successfully entered the fray with 
new e-book reader applications, clearer fonts, and massive marketing. 
Retailers - such as Barnes and Noble - opened their gates to e-books. A 
host of independent publishers make good use of the negligible-cost 
distribution channel that the Internet is. Competition and positioning are 
already fierce - a good sign. 
The Internet used to be an English, affluent middle-class, white collar, 
male phenomenon. It has long lost these attributes. The digital divides 
that opened up with the early adoption of the Net by academe and business - 
are narrowing. Already there are more women than men users and English is 
the language of less than half of all web sites. The wireless Net will 
grant developing countries the chance to catch up. 
Astute entrepreneurs are bound to take advantage of the business-friendly 
profile of the manpower and investment-hungry governments of some 
developing countries. It is not uncommon to find a mastery of English, a 
college degree in the sciences, readiness to work outlandish hours at a 
fraction of wages in Germany or the USA - all combined in one employee in 
these deprived countries. India has sprouted a whole industry based on 
these competitive endowments. 
Here is how Steve Potash, OverDrive's CEO, explains his daring move in 
OverDrive's press release dated May 22, 2001: 
"Everyone we are partnering with in the US and worldwide has been very 
excited and delighted by the tremendous success and quality of eBook 
production from OverDrive Jamaica. Jamaica has tremendous untapped talent 
in its young people. Jamaica is the largest English-speaking nation in the 
Caribbean and their educational and technical programs provide us with a 
wealth of quality candidates for careers in electronic publishing. We could 
not have had this success without the support and responsiveness of the 
Jamaican government and its agencies. At every stage the agencies assisted 
us in opening our technology centre and staffing it with trained and 
competent eBook professionals. OverDrive Jamaica will be pioneering many of 
the advances for extending books, reference materials, textbooks, 
literature and journals into new digital channels - and will shortly become 
the foremost centre for eBook automation serving both US and international 
markets". 
Druanne Martin, OverDrive's Director of Publishing Services, elaborates: 
"With Jamaica and Cleveland, Ohio sharing the same time zone (EST), we 
have our US and Jamaican production teams in sync. Jamaica provides a 
beautiful and warm climate, literally, for us to build long-term 
partnerships and to invite our publishing and content clients to come and 
visit their books in production". 
The Jamaican Minister of Industry, Commerce and Technology, the Hon. 
Phillip Paulwell reciprocates: 
"We are proud that OverDrive has selected Jamaica to extend its leadership 
in eBook technology. OverDrive is benefiting from the investments Jamaica 
has made in developing the needed infrastructure for IT companies to locate 
and build skilled workforces here." 
There is nothing new in outsourcing back office work (insurance claims 
processing, air ticket reservations, medical records maintenance) to third 
world countries, such as (the notable example) India. Research and 
Development is routinely farmed out to aspiring first world countries such 
as Israel and Ireland. But OverDrive's Jamaican facility is an example of 
something more sophisticated and more durable. Western firms are 
discovering the immense pools of skills, talent, innovation, and top notch 
scientific and other education often offered even by the poorest of 
nations. These multinationals entrust the locals now with more than 
keyboarding and responding to customer queries using fake names. The 
Jamaican venture is a business partnership. In a way, it is a topsy-turvy 
world. Digital animation is produced in India and consumed in the States. 
The low compensation of scientists attracts the technology and R&D arms of 
the likes of General Electric to Asia and Intel to Israel. In other words, 
there are budding signs of a reversing brain drain - from West to East. 
E-publishing is at the forefront of software engineering, e-consumerism, 
intellectual property technologies, payment systems, conversion 
applications, the mobile Internet, and, basically, every important trend in 
network and computing and digital content. Its migration to warmer and 
cheaper climates may be inevitable. OverDrive sounds happy enough. 
 
An Embarrassment of Riches 
By: Sam Vaknin
 
http://www.doi.org/  
  
The Internet is too rich. Even powerful and sophisticated search engines, 
such as Google, return a lot of trash, dead ends, and Error 404's in 
response to the most well-defined query, Boolean operators and all. 
Directories created by human editors - such as Yahoo! or the Open Directory 
Project - are often overwhelmed by the amount of material out there. Like 
the legendary blob, the Internet is clearly out of classificatory control. 
Some web sites - like Suite101 - have introduced the old and tried Dewey 
subject classification system successfully used in non-virtual libraries 
for more than a century. Books - both print and electronic - (actually, 
their publishers) get assigned an ISBN (International Standard Book Number) 
by national agencies. Periodical publications (magazines, newsletters, 
bulletins) sport an ISSN (International Standard Serial Number). National 
libraries dole out CIP's (Cataloguing in Publication numbers), which help 
lesser outfits to catalogue the book upon arrival. But the emergence of new 
book formats, independent publishing, and self publishing has strained this 
already creaking system to its limits. In short: the whole thing is fast 
developing into an awful mess. 
Resolution is one solution. 
Resolution is the linking of identifiers to content. An identifier can be a 
word, or a phrase. RealNames implemented this approach and its proprietary 
software is now incorporated in most browsers. The user types a word, brand 
name, phrase, or code, and gets re-directed to a web site with the 
appropriate content. The only snag: RealNames identifiers are for sale. 
Thus, its identifiers are not guaranteed to lead to the best, only, or 
relevant resource. Similar systems are available in many languages. Nexet, 
for example, provides such a resolution service in Hebrew. 
The Association of American Publishers (AAP) has an Enabling Technologies 
Committee. Fittingly, at the Frankfurt Book Fair of 1997, it announced the 
DOI (Digital Object Identifier) initiative. An International DOI Foundation 
(IDF) was set up and invited all publishers - American and non-American 
alike - to apply for a unique DOI prefix. DOI is actually a special case 
of a larger system of "handles" developed by the CNRI (Corporation for 
National Research Initiatives). Their "Handle Resolver" is browser plug-in 
software which re-directs their handles to URL's or other pieces of 
data or content. Without the Resolver, typing in the handle simply directs 
the user to a few proxy servers, which "understand" the handle protocols. 
The interesting (and new) feature of the system is its ability to resolve 
to MULTIPLE locations (URL's, or data, or content). The same identifier can 
resolve to a Universe of inter-related information (effectively, to a mini-
library). The content thus resolved need not be limited to text. Multiple 
resolution works with audio, images, and even video. 
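In programming terms, multiple resolution is nothing more exotic than 
mapping one persistent identifier to a menu of typed locations instead of 
a single URL. A minimal sketch (the identifier and addresses below are 
invented for illustration): 
 
    # Toy multiple-resolution registry: one identifier, many typed
    # resources; single resolution would store just one URL.
    registry = {
        "10.1000/example-handle": {
            "purchase": ["https://retailer.example/book"],
            "review": ["https://magazine.example/review"],
            "download": ["https://mirror-us.example/file",
                         "https://mirror-ar.example/file"],
        },
    }

    def resolve(identifier, resource_type=None):
        record = registry[identifier]
        if resource_type is None:
            return record      # the whole menu of related resources
        return record[resource_type]

    print(resolve("10.1000/example-handle", "download"))
 
Updating a record changes only what the identifier resolves to; the 
identifier itself, like a DOI, remains permanent. 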
The IDF's press release is worth some extensive quoting: 
"Imagine you're the manager of an Internet company reading a story online 
in the "Wall Street Journal" written by Stacey E. Bressler, a co-author of 
Communities of Commerce, and at the end of the story there is a link to 
purchase options for the book.  
Now imagine you are an online retailer, a syndicator or a reporter for an 
online news service and you are reading a review in "Publishers Weekly" 
about Communities of Commerce and you run across a link to related 
resources.  
And imagine you are in Buenos Aires, and in an online publication you 
encounter a link to "D-Lib Magazine", an electronic journal produced in 
Washington, D.C. which offers you locale-specific choices for downloading 
an article.  
The above examples demonstrate how multiple resolution can present you with 
a list of links from within an electronic document or page. The links 
beneath the labels - URLs and email addresses - would all be stored in the 
DOI System, and multiple resolution means any or all of those links can be 
displayed for you to select from in one menu. Any combination of links to 
related resources can be included in these menus.  
Capable of providing much richer experiences than single resolution to a 
URL, Multiple Resolution operates on the premise that content, not its 
location, is identified. In other words, where content and related 
resources reside is secondary information. Multiple Resolution enables 
content owners and distributors to identify their intellectual property 
with bound collections of related resources at a hyperlink's point of 
departure, instead of requiring a user to leave the page to go to a new 
location for further information.  
A content owner controls and manages all the related resources in each of 
these menus and can determine which information is accessible to each 
business partner within the supply chain. When an administrator changes any 
facet of this information, the change is simultaneous on all internal 
networks and the Internet. A DOI is a permanent identifier, analogous to a 
telephone number for life, so tomorrow and years from now a user can locate 
the product and related resources wherever they may have been moved or 
archived to." 
The IDF provides a limited, text-only, online demonstration. When sweeping 
with the cursor over a linked item, a pop-down menu of options is 
presented. These options are pre-defined and customized by the content 
creators and owners. In the first example above (book purchase options) the 
DOI resolves to retail outlets (categorized by book formats), information 
about the title and the author, digital rights management information 
(permissions), and more. The DOI server generates this information in "real 
time", "on the fly". But it is the author, or (more often) the publisher 
that choose the information, its modes of presentation, selections, and 
marketing and sales data. The ingenuity is in the fact that the DOI 
server's files and records can be updated, replaced, or deleted. It does 
not affect the resolution path - only the content resolved to. 
Which brings us to e-publishing. 
Second part of Embarrassment of Riches - here:
http://samvak.tripod.com/busiweb17.html



 
The Fall and Fall of the P-Zine  
   
By: Sam Vaknin 
http://home.wuliweb.com/index.shtml 
http://www.pshares.org/  
  
The circulation of print magazines has declined precipitously in the last 
24 months. This dissolution of subscriber bases has accelerated 
dramatically as economic recession set in. But a diminishing wealth effect 
is only partly to blame. The managements of printed periodicals - from 
dailies to quarterlies - failed miserably to grasp the Internet's 
potential and its potential threat. They were fooled by the lack of 
convenient and cheap 
e-reading devices into believing that old habits die hard. They do - but 
magazine reading is not habit forming. Readers' loyalties are fickle and 
shift according to content and price. The Web offers cornucopial and niche-
targeted content - free of charge or very cheaply. This is hard to beat and 
is getting harder by the day as natural selection among dot.bombs spares 
only quality content providers.  
Consider Ploughshares, the Literary Journal. 
It is a venerable, not for profit, print journal published by Emerson 
College, now marking its 30th anniversary. It recently inaugurated its web 
sibling. The project consumed three years and $125,000 (grant from the 
Wallace-Reader's Digest Funds). Every title Ploughshares has ever published 
was indexed (over 18,000 journal pages digitized). In all, the "website 
will offer free access to over 2,750 poems and short stories from past and 
current issues." 
The more than 2000 (!) authors ever published in Ploughshares will each 
maintain a personal web page comprising biographical notes, press releases, 
new books and events announcements and links to other web sites. This is 
the Yahoo! formula. Content generated by the authors will thus transform 
Ploughshares into a leading literary portal. 
 
But Ploughshares did not stop at these standard features. A "bookshelf" 
will link to book reviews contributed online (and augmented by the 
magazine's own prestigious offerings). An annotated bookstore is just a 
step away (though Ploughshares' web site does not hitherto include one). 
The next best thing is a rights-management application used by the 
journal's authors to grant online publishing permissions for their work to 
third parties. 
 
No print literary magazine can beat this one stop shop. So, how can print 
publications defend themselves? 
By being creative and by not conceding defeat is how. 
Consider WuliWeb's example of thinking outside the printed box. 
It is a simple online application which enables its users to "send, save 
and share material from print publications". Participating magazines and 
newspapers print "WuliCodes" on their (physical) pages and WuliWeb 
subscribers barcode-scan, or manually enter them into their online "Content 
Manager" via keyboard, PDA, pager, cell phone, or fixed phone (using a 
PIN). The service is free (paid for by the magazine publishers and 
advertisers) and, according to WuliWeb, offers these advantages to its 
users: 
"Once you choose to use WuliWeb's free service, you will no longer have to 
laboriously "tear and share" print articles or ads that you want to archive 
or share with colleagues or friends. You will be able to store material 
sourced from print publications permanently in your own secure, electronic 
files, and you can share this material instantly with any number of people. 
Magazine and Newspaper Publishers will now have the ability to distribute 
their online content more widely and to offer a richer experience to their 
readers. Advertisers will be able to deploy dynamic and media-rich content 
to attract and convert customers, and will be able to communicate more 
completely with their customers." 
Links to the shared material are stored in WuliWeb's central database and 
users gain access to them by signing up for a (free) WuliWeb account. Thus, 
the user's mailbox is unencumbered by huge downloads. Moreover, WuliWeb 
allows for a keywords-based search of articles saved.  
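The mechanics behind such a service are simple: a central table maps each 
printed code to a publisher-hosted link plus keywords, and a user's 
account stores code references, never the articles themselves. A 
guessed-at sketch (the article does not disclose WuliWeb's actual schema): 
 
    # Code-to-link clipping service: the central database holds only
    # links and keywords; articles stay on publishers' sites.
    codes = {
        "WX-4471": {"url": "https://publisher.example/story",
                    "keywords": {"e-books", "publishing"}},
    }
    accounts = {"reader@example.com": []}

    def save(user, code):
        accounts[user].append(code)

    def search(user, keyword):
        return [codes[c]["url"] for c in accounts[user]
                if keyword in codes[c]["keywords"]]

    save("reader@example.com", "WX-4471")
    print(search("reader@example.com", "publishing"))
 
Storing links rather than copies keeps accounts light - and explains the 
drawback discussed next. 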
Perhaps the only serious drawback is that WuliWeb provides its users only 
with LINKS to content stored on publishers' web sites. It is a directory 
service - not a full text database. This creates dependence. Links may get 
broken. Whole web sites vanish. Magazines and their publishers go under. 
All the more reason for publishers to adopt this service and make it their 
own.
 
 
The Internet and the Library   
   
By: Sam Vaknin
"In this digital age, the custodians of published works are at the center 
of a global copyright controversy that casts them as villains simply for 
doing their job: letting people borrow books for free." 
(ZDNet, quoted by "Publisher's Lunch" on July 13, 2001) 
It is amazing that the traditional archivists of human knowledge - the 
libraries - failed so spectacularly to ride the tiger of the Internet, that 
epitome and apex of knowledge creation and distribution. At first, 
libraries, the inertial repositories of printed matter, were overwhelmed by 
the rapid pace of technology and by the ephemeral and anarchic content it 
spawned. They were reduced to providing access to dull card catalogues and 
unimaginative collections of web links. The more daring added online 
exhibits and digitized collections. A typical library web site is still 
comprised of static representations of the library's physical assets and a 
few quasi-interactive services.  
This tendency - by both publishers and libraries - to inadequately and 
inappropriately pour old wine into new vessels is what caused the recent 
furor over e-books.  
 
The lending of e-books to patrons appears to be a natural extension of the 
classical role of libraries: physical book lending. Libraries sought also 
to extend their archival functions to e-books. But librarians failed to 
grasp the essential and substantive differences between the two formats. E-
books can be easily, stealthily, and cheaply copied, for instance. 
Copyright violations are a real and present danger with e-books. Moreover, 
e-books are not a tangible product. "Lending" an e-book - is tantamount to 
copying an e-book. In other words, e-books are not books at all. They are 
software products. Libraries have pioneered digital collections (as they 
have other information technologies throughout history) and are still the 
main promoters of e-publishing. But now they are at risk of becoming piracy 
portals.  
Solutions are, appropriately, being borrowed from the software industry. 
NetLibrary has lately granted multiple user licences to a university 
library system. Such licences allow for unlimited access and are priced 
according to the number of the library's patrons, or the number of its 
reading devices and terminals. Another possibility is to implement the 
shareware model - a trial period followed by a purchase option or an 
expiration, a-la Rosetta's expiring e-book.  
 
Distributor Baker & Taylor have unveiled at the recent ALA a prototype e-
book distribution system jointly developed  by ibooks and Digital Owl. It 
will be sold to libraries by B&T's Informata division and Reciprocal. 
 
The annual subscription for use of the digital library comprises "a catalog 
of digital content, brandable pages and web based tools for each 
participating library to customize for their patrons. Patrons of 
participating libraries will then be able to browse digital content online, 
or download and check out the content they are most interested in. Content 
may be checked out for an extended period of time set by each library, 
including checking out eBooks from home." Still, it seems that B&T's 
approach is heavily influenced by software licencing ("one copy one use"). 
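"One copy one use" is straightforward to express in code: a library 
holding k licences may have at most k copies checked out at any one 
moment, mirroring physical lending rather than unlimited digital copying. 
A hypothetical sketch: 
 
    # "One copy one use" licencing: concurrent checkouts are capped
    # at the number of licences the library has purchased.
    class EBookLicence:
        def __init__(self, title, licences):
            self.title = title
            self.licences = licences
            self.checked_out = 0

        def check_out(self):
            if self.checked_out >= self.licences:
                return False    # patron must wait for a check-in
            self.checked_out += 1
            return True

        def check_in(self):
            self.checked_out -= 1

    book = EBookLicence("Example Title", licences=2)
    print(book.check_out(), book.check_out(), book.check_out())
    # True True False - the third patron is turned away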
 
But, there is an underlying, fundamental incompatibility between the 
Internet and the library. They are competitors. One vitiates the other. 
Free Internet access and e-book reading devices in libraries 
notwithstanding - the Internet, unless harnessed and integrated by 
libraries, threatens their very existence by depriving them of patrons. 
Libraries, in turn, threaten the budding software industry we, 
misleadingly, call "e-publishing".  
There are major operational and philosophical differences between physical 
and virtual libraries. The former are based on the tried and proven 
technology of print. The latter on the chaos we know as cyberspace and on 
user-averse technologies developed by geeks and nerds, rather than by 
marketers, users, and librarians. 
Physical libraries enjoy great advantages, not the least being their habit-
forming head start (2,500 years of first mover advantage). Libraries are 
hubs of social interaction and entertainment (the way cinemas used to be). 
Libraries have catered to users' reference needs in reference centres for 
centuries (and, lately, through Selective Dissemination of Information, or 
SDI). The war is by no means decided. "Progress" may yet consist of the 
assimilation of hi-tech gadgets by lo-tech libraries. It may turn out to be 
convergence at its best, as librarians become computer savvy - and computer 
types create knowledge and disseminate it.
 
 
A Brief History of the Book  
 
By: Sam Vaknin
"The free communication of thought and opinion is one of the most precious 
rights of man; every citizen may therefore speak, write and print freely." 
(French National Assembly, 1789) 
I. What is a Book? 
UNESCO's arbitrary and ungrounded definition of "book" is: 
""Non-periodical printed publication of at least 49 pages excluding 
covers". 
But a book, above all else, is a medium. It encapsulates information (of 
one kind or another) and conveys it across time and space. Moreover, 
contrary to common opinion, it is - and has always been - a rigidly formal 
affair. Even the latest "innovations" are nothing but ancient wine in 
sparkling new bottles. 
Consider the scrolling protocol. Our eyes and brains are limited readers-
decoders. There is only that much that the eye can encompass and the brain 
interpret. Hence the need to segment data into cognitively digestible 
chunks. There are two forms of scrolling - lateral and vertical. The 
papyrus, the broadsheet newspaper, and the computer screen are three 
examples of the vertical scroll - from top to bottom or vice versa. The e-
book, the microfilm, the vellum, and the print book are instances of the 
lateral scroll - from left to right (or from right to left, in the Semitic 
languages).  
In many respects, audio books are much more revolutionary than e-books. 
They do not employ visual symbols (all other types of books do), or a 
straightforward scrolling method. E-books, on the other hand, are a 
throwback to the days of the papyrus.  The text cannot be opened at any 
point in a series of connected pages and the content is carried only on one 
side of the (electronic) "leaf". Parchment, by comparison, was multi-paged, 
easily browseable, and printed on both sides of the leaf. It led to a 
revolution in publishing and to the print book. All these advances are now 
being reversed by the e-book. Luckily, the e-book retains one innovation of 
the parchment - the hypertext. Early Jewish and Christian texts (as well 
as Roman legal scholarship) were written on parchment (and later printed) 
and 
included numerous inter-textual links. The Talmud, for example, is made of 
a main text (the Mishna) which hyperlinks on the same page to numerous 
interpretations (exegesis) offered by scholars throughout generations of 
Jewish learning.   
Another distinguishing feature of books is portability (or mobility). Books 
on papyrus, vellum, paper, or PDA - are all transportable. In other words, 
the replication of the book's message is achieved by passing it along and 
no loss is incurred thereby (i.e., there is no physical metamorphosis of 
the message). The book is like a perpetuum mobile. It spreads its content 
virally by being circulated and is not diminished or altered by it. 
Physically, it is eroded, of course - but it can be copied faithfully. It 
is permanent.  
Not so the e-book or the CD-ROM. Both are dependent on devices (readers or 
drives, respectively). Both are technology-specific and format-specific. 
Changes in technology - both in hardware and in software - are liable to 
render many e-books unreadable. And portability is hampered by battery 
life, lighting conditions, or the availability of appropriate 
infrastructure (e.g., of electricity).  
II. The Constant Content Revolution 
Every generation applies the same age-old principles to new "content-
containers". Every such transmutation yields a great surge in the creation 
of content and its dissemination. The incunabula (the first printed books) 
made knowledge accessible (sometimes in the vernacular) to scholars and 
laymen alike and liberated books from the scriptoria and "libraries" of 
monasteries. The printing press technology shattered the content monopoly. 
In 50 years (1450-1500), the number of books in Europe surged from a few 
thousand to more than 9 million! And, as McLuhan has noted, it shifted the 
emphasis from the oral mode of content distribution (i.e., "communication") 
to the visual mode. 
E-books are threatening to do the same. "Book ATMs" will provide Print on 
Demand (POD) services to faraway places. People in remote corners of the 
earth will be able to select from publishing backlists and front lists 
comprising millions of titles. Millions of authors are now able to realize 
their dream to have their work published cheaply and without editorial 
barriers to entry. The e-book is the Internet's prodigal son. The latter is 
the ideal distribution channel of the former. The monopoly of the big 
publishing houses on everything written - from romance to scholarly 
journals - is a thing of the past. In a way, it is ironic. Publishing, in 
its earliest forms, was a revolt against the writing (letters) monopoly of 
the priestly classes. It flourished in non-theocratic societies such as 
Rome, or China - and languished where religion reigned (such as in Sumeria, 
Egypt, the Islamic world, and Medieval Europe). 
With e-books, content will once more become a collaborative effort, as it 
has been well into the Middle Ages. Authors and audience used to interact 
(remember Socrates) to generate knowledge, information, and narratives. 
Interactive e-books, multimedia, discussion lists, and collective 
authorship efforts restore this great tradition. Moreover, as in the not so 
distant past, authors are yet again the publishers and sellers of their 
work. The distinction between these functions is very recent. E-books and 
POD partially help to restore the pre-modern state of affairs. Up until the 
20th century, some books first appeared as a series of pamphlets (often 
published in daily papers or magazines) or were sold by subscription. 
Serialized e-books resort to these erstwhile marketing ploys. E-books may 
also help restore the balance between best-sellers and midlist authors and 
between fiction and textbooks. E-books are best suited to cater to niche 
markets, hitherto neglected by all major publishers. 



III. Literature for the Millions 
E-books are the quintessential "literature for the millions". They are 
cheaper than even paperbacks. John Bell (competing with Dr. Johnson) 
published "The Poets of Great Britain" in 1777-83. Each of the 109 volumes 
cost six shillings (compared to the usual guinea or more). The Railway 
Library of novels (1,300 volumes) cost 1 shilling apiece only eight 
decades later. The price continued to dive throughout the next century and 
a half. E-books and POD are likely to do unto paperbacks what these 
reprints did to originals. Some reprint libraries specialized in public 
domain works, very much like the bulk of e-book offerings nowadays. 
The plunge in book prices, the lowering of barriers to entry due to new 
technologies and plentiful credit, the proliferation of publishers, and 
the cutthroat competition among booksellers were such that price 
regulation (cartels) had to be introduced. Net publisher prices, trade 
discounts, list 
prices were all anti-competitive inventions of the 19th century, mainly in 
Europe. They were accompanied by the rise of trade associations, publishers 
organizations, literary agents, author contracts, royalties agreements, 
mass marketing, and standardized copyrights.  
The sale of print books over the Internet can be conceptualized as the 
continuation of mail order catalogues by virtual means. But e-books are 
different. They are detrimental to all these cosy arrangements. Legally, an 
e-book may not be considered to constitute a "book" at all. Existing 
contracts between authors and publishers may not cover e-books. The serious 
price competition they offer to more traditional forms of publishing may 
end up pushing the whole industry to re-define itself. Rights may have to 
be re-assigned, revenues re-distributed, contractual relationships re-
thought. Moreover, e-books have hitherto been to print books what 
paperbacks are to hardcovers - re-formatted renditions. But more and more 
authors are publishing their books primarily or exclusively as e-books. E-
books thus threaten hardcovers and paperbacks alike. They are not merely a 
new format. They are a new mode of publishing. 
Every technological innovation was bitterly resisted by Luddite printers 
and publishers: stereotyping, the iron press, the application of steam 
power, mechanical typecasting and typesetting, new methods of reproducing 
illustrations, cloth bindings, machine-made paper, ready-bound books, 
paperbacks, book clubs, and book tokens. Without exception, they relented 
and adopted the new technologies to their considerable commercial 
advantage. It is no surprise, therefore, that publishers were hesitant to 
adopt the Internet, POD, and e-publishing technologies. The surprise lies 
in the relative haste with which they came to adopt them, egged on by 
authors 
and booksellers. 
IV. Intellectual Pirates and Intellectual Property 
Despite the technological breakthroughs that coalesced to form the modern 
printing press - printed books in the 17th and 18th centuries were derided 
by their contemporaries as inferior to their laboriously hand-made 
antecedents and to the incunabula. One is reminded of the current 
complaints about the new media (Internet, e-books), its shoddy workmanship, 
shabby appearance, and the rampant piracy. The first decades following the 
invention of the printing press, were, as the Encyclopedia Britannica puts 
it "a restless, highly competitive free for all ... (with) enormous 
vitality and variety (often leading to) careless work".  
There were egregious acts of piracy - for instance, the illicit copying of 
the Aldine Latin "pocket books", or the all-pervasive piracy in England in 
the 17th century (a direct result of over-regulation and coercive copyright 
monopolies). Shakespeare's work was published by notorious pirates and 
infringers of emerging intellectual property rights. Later, the American 
colonies became the world's centre of industrialized and systematic book 
piracy. Confronted with abundant and cheap pirated foreign books, local 
authors resorted to freelancing in magazines and lecture tours in a vain 
effort to make ends meet. 
Pirates and unlicenced - and, therefore, subversive - publishers were 
prosecuted under a variety of monopoly and libel laws (and, later, under 
national security and obscenity laws). There was little or no difference 
between royal and "democratic" governments. They all acted ruthlessly to 
preserve their control of publishing. John Milton wrote his passionate plea 
against censorship, Areopagitica, in response to the 1643 licencing 
ordinance passed by Parliament. The revolutionary Copyright Act of 1709 in 
England established the rights of authors and publishers to reap the 
commercial fruits of their endeavours exclusively, though only for a 
prescribed period of time. 
V. As Readership Expanded 
The battle between industrial-commercial publishers (fortified by ever more 
potent technologies) and the arts and craftsmanship crowd never ceased and 
it is raging now as fiercely as ever in numerous discussion lists, fora, 
tomes, and conferences. William Morris started the "private press" movement 
in England in the 19th century to counter what he regarded as the callous 
commercialization of book publishing. When the printing press was invented, 
it was put to commercial use by private entrepreneurs (traders) of the day. 
Established "publishers" (monasteries), with a few exceptions (e.g., in 
Augsburg, Germany and in Subiaco, Italy) shunned it and regarded it as a 
major threat to culture and civilization. Their attacks on printing read 
like the litanies against self-publishing or corporate-controlled 
publishing today.  
But, as readership expanded (women and the poor became increasingly 
literate), market forces reacted. The number of publishers multiplied 
relentlessly. At the beginning of the 19th century, innovative lithographic 
and offset processes allowed publishers in the West to add illustrations 
(at first, black and white and then in color), tables, detailed maps and 
anatomical charts, and other graphics to their books. Battles fought 
between publishers-librarians over formats (book sizes) and fonts (Gothic 
versus Roman) were ultimately decided by consumer preferences. Multimedia 
was born. The e-book will, probably, undergo a similar transition from 
being the static digital rendition of a print edition - to being a lively, 
colorful, interactive and commercially enabled creature.  
The commercial lending library and, later, the free library were two 
additional reactions to increasing demand. As early as the 18th century, 
publishers and booksellers expressed the fear that libraries would 
cannibalize their trade. Two centuries of accumulated experience 
demonstrate that the opposite has happened. Libraries have enhanced book 
sales and have become a major market in their own right. 
VI. The State of Subversion 
Publishing has always been a social pursuit and depended heavily on social 
developments, such as the spread of literacy and the liberation of 
minorities (especially, of women). As every new format matures, it is 
subjected to regulation from within and from without. E-books (and, by 
extension, digital content on the Web) will be no exception. Hence the 
recurrent and current attempts at regulation.  
Every new variant of content packaging was labeled as "dangerous" at its 
inception. The Church (formerly the largest publisher of bibles and other 
religious and "earthly" texts and the upholder and protector of reading in 
the Dark Ages) castigated and censored the printing of "heretical" books 
(especially the vernacular bibles of the Reformation) and restored the 
Inquisition for the specific purpose of controlling book publishing. In 
1559, it published the Index Librorum Prohibitorum ("Index of Prohibited 
Books"). A few (mainly Dutch) publishers even went to the stake (a habit 
worth reviving, some current authors would say...). European rulers issued 
proclamations against "naughty printed books" (of heresy and sedition). The 
printing of books was subject to licencing by the Privy Council in England. 
The very concept of copyright arose out of the forced registration of books 
in the register of the English Stationers' Company (a royal instrument of 
influence and intrigue). Such obligatory registration granted the publisher 
the right to exclusively copy the registered book (often, a class of books) 
for a number of years - but politically restricted printable content, often 
by force. Freedom of the press and free speech are still distant dreams in 
many corners of the earth. The Digital Millennium Copyright Act (DMCA), the 
V-chip and other privacy invading, dissemination inhibiting, and censorship 
imposing measures perpetuate a veteran if not so venerable tradition.  



VII. The More it Changes 
The more it changes, the more it stays the same. If the history of the book 
teaches us anything it is that there are no limits to the ingenuity with 
which publishers, authors, and booksellers, re-invent old practices. 
Technological and marketing innovations are invariably perceived as threats 
- only to be adopted later as articles of faith. Publishing faces the same 
issues and challenges it faced five hundred years ago and responds to them 
in much the same way. Yet, every generation believes its experiences to be 
unique and unprecedented. It is this denial of the past that casts a shadow 
over the future. Books have been with us since the dawn of civilization, 
millennia ago. In many ways, books constitute our civilization. Their 
traits are its traits: resilience, adaptation, flexibility, self re-
invention, wealth, communication. We would do well to accept that our most 
familiar artifacts - books - will never cease to amaze us. 
 
The Affair of the Vanishing Content   
 
By: Sam Vaknin
http://www.archive.org/ 
"Digitized information, especially on the Internet, has such rapid turnover 
these days that total loss is the norm. Civilization is developing severe 
amnesia as a result; indeed it may have become too amnesiac already to 
notice the problem properly."
(Stewart Brand, President, The Long Now Foundation)
Thousands of articles and essays posted by hundreds of authors were lost 
forever when themestream.com surprisingly shut its virtual gates. A sizable 
portion of the 1960 census, recorded on UNIVAC II-A tapes, is now 
inaccessible. Web hosts crash daily, erasing in the process valuable 
content. Access to web sites is often suspended - or blocked altogether - 
because of a real (or imagined) violation by the webmaster of the host's 
Terms of Service (TOS). Millions of other web sites - the results of 
collective, multi-annual, transcontinental efforts - contain unique stores 
of information in the form of databases, articles, discussion threads, and 
links to other web sites. Consider "Central Europe Review". Its archives 
comprise more than 2500 articles and essays about every conceivable aspect 
of Central and Eastern Europe and the Balkans. It is one of countless such 
collections.
Similar and much larger treasures have perished since the dawn of the 
digital age in the 1920's. Very few early radio and TV programs have 
survived, for instance. The current "digital dark age" can be compared only 
to the one which followed the torching of the Library of Alexandria. The 
more accessible and abundant the information available to us - the more 
devalued and common it becomes and the less institutional and cultural 
memory we seem to possess. In the battle between paper and screen, the 
former has won formidably. Newspaper archives, dating back to the 1700's, 
are now being digitized - testifying to the endurance, resilience, and 
longevity of paper.
Enter the "Internet Libraries", or Digital Archival Repositories (DAR). 
These are libraries that provide free access to digital materials 
replicated across multiple servers ("safety in redundancy"). They contain 
Web pages, television programming, films, e-books, archives of discussion 
lists, etc. Such materials can help linguists trace the development of 
language, journalists conduct research, scholars compare notes, students 
learn, and teachers teach. The Internet's evolution mirrors closely the 
social and cultural history of North America at the end of the 20th 
century. If not preserved, our understanding of who we are and where we are 
going will be severely hampered. The clues to our future lie ensconced in 
our past. It is the only guarantee against repeating the mistakes of our 
predecessors. Long gone Web pages cached by the likes of Google and Alexa 
constitute the first tier of such archival undertaking. 
The Stanford Archival Vault (SAV) at Stanford University assigns a 
numerical handle to every digital "object" (record) in a repository. The 
handle is computed by a mathematical formula from the information bits of 
the original object being deposited. This makes it possible to track and 
uniquely identify records across multiple repositories. It also prevents 
tampering. SAV also offers application layers. These allow programmers to 
develop digital archive software and permit users to change the "view" 
(the interface) of an archive and thus to mine data. Its "reliability 
layer" verifies the completeness and accuracy of digital repositories.
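The essay does not spell out SAV's formula, so the following Python 
fragment is only a minimal sketch of the idea of a content-derived handle: 
a digest computed from an object's bits identifies it across repositories 
and betrays any tampering. The choice of SHA-256 is an assumption made 
purely for illustration.

    import hashlib

    def handle_for(record: bytes) -> str:
        # The handle is derived from the record's bits, so identical
        # deposits yield identical handles in every repository.
        return hashlib.sha256(record).hexdigest()

    original = b"1960 census record ..."
    print(handle_for(original))          # stable, repository-independent
    print(handle_for(original + b"!"))   # any tampering changes the handle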
The Internet Archive, a leading digital depository, in its own words:
"...is working to prevent the Internet  a new medium with major historical 
significance  and other "born-digital" materials from disappearing into 
the past. Collaborating with institutions including the Library of Congress 
and the Smithsonian, we are working to permanently preserve a record of 
public material."
Data storage is the first phase. It is not as simple as it sounds. The 
proliferation of formats of digital content has made it necessary to 
develop a standard for archiving Internet objects. The sheer size of the 
digitized collections poses a serious challenge to timely retrieval. 
Interoperability issues (numerous formats and readers) probably require 
software and hardware plug-ins to render a smooth and transparent user 
interface.
Moreover, as time passes, digital data stored on magnetic media 
deteriorate. They must be copied to newer media every 10 years or so 
("migration"). Advances in hardware and software render many digital 
records indecipherable (try reading your word processing files from 1981, 
stored on 5.25" floppies!). Special emulators of older hardware and 
software must be used to decode ancient data files. And, to ameliorate the 
impact of inevitable natural disasters, accidents, bankruptcies of 
publishers, and politically motivated destruction of data - multiple 
copies and redundant systems and archives must be maintained. As time 
passes, data formatting "dictionaries" will be needed. Data preservation 
is hardly useful if the data cannot be searched, retrieved, extracted, and 
researched. And, as "The Economist" put it ("The Economist Technology 
Quarterly", September 22nd, 2001), without a "Rosetta Stone" of data 
formats, the future deciphering of stored data might prove an 
insurmountable task.
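A data formatting "dictionary" of the kind called for above can be 
pictured as a simple lookup table that maps a file format to the knowledge 
needed to decode it decades later. A minimal Python sketch follows; the 
entries are invented examples, and a real registry would be far richer.

    # A toy "Rosetta Stone" of data formats.
    FORMATS = {
        ".wp": {"era": "early 1980s", "decoder": "word processor emulator"},
        ".wks": {"era": "mid 1980s", "decoder": "spreadsheet reader"},
    }

    def how_to_read(filename: str) -> str:
        extension = filename[filename.rfind("."):]
        entry = FORMATS.get(extension)
        return entry["decoder"] if entry else "format unknown - data at risk"

    print(how_to_read("thesis.wp"))      # -> word processor emulator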
Last, but by no means least, Internet libraries are Internet based. They 
themselves are as ephemeral as the historical record they aim to preserve. 
This tenuous cyber existence goes a long way towards explaining why our 
paperless offices consume much more paper than ever before. 

Revolt of the Poor - The Demise of Intellectual Property
By: Sam Vaknin
Three years ago I published a book of short stories in Israel. The 
publishing house belongs to Israel's leading (and exceedingly wealthy) 
newspaper. I signed a contract which stated that I was entitled to receive 
8% of the income from the sales of the book, after commissions payable to 
distributors, shops, etc. A few months later (1997), I won the coveted 
Prize of the Ministry of Education (for short prose). The prize money (a 
few thousand DM) was snatched by the publishing house on the legal grounds 
that all the money generated by the book belonged to it, because it owned 
the copyright. 
In the mythology generated by capitalism to pacify the masses, the myth of 
intellectual property stands out. It goes like this: if the rights to 
intellectual property were not defined and enforced, commercial 
entrepreneurs would not take on the risks associated with publishing 
books, recording records, and preparing multimedia products. As a result, 
creative people would suffer because they would find no way to make their 
works accessible to the public. Ultimately, it is the public which pays 
the price of piracy, goes the refrain. 
But this is factually untrue. In the USA, only a very limited group of 
authors actually lives by the pen. Only select musicians eke out a living 
from their noisy vocation (most of them rock stars who own their labels - 
George Michael had to fight Sony to do just that) and very few actors come 
close to deriving a subsistence-level income from their profession. None 
of these can any longer be thought of as mostly creative people. Forced to 
defend their intellectual property rights and the interests of Big Money, 
Madonna, Michael Jackson, Schwarzenegger and Grisham are businessmen at 
least as much as they are artists. 
Economically and rationally, we should expect that the costlier a work of 
art is to produce and the narrower its market - the more vigorously its 
intellectual property rights would be enforced. 
Consider a publishing house. 
A book which costs 50,000 DM to produce, with a potential audience of 
1,000 purchasers (certain academic texts are like this), would have to be 
priced at a minimum of 100 DM to recoup only the direct costs. If illegally 
copied (thereby shrinking the potential market as some people will prefer 
to buy the cheaper illegal copies) - its price would have to go up 
prohibitively to recoup costs, thus driving out potential buyers. The story 
is different if a book costs 10,000 DM to produce and is priced at 20 DM a 
copy with a potential readership of 1,000,000 readers. Piracy (illegal 
copying) should in this case be more readily tolerated as a marginal 
phenomenon. 
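The arithmetic behind these figures can be made explicit. The sketch 
below assumes - an assumption, not a statement from the text - that 
distributors and shops take roughly half of the cover price, which 
reconciles the 100 DM minimum with the 50 DM raw per-copy cost.

    def minimum_price(production_cost, expected_buyers, publisher_share=0.5):
        # Per-copy cost, grossed up for the share kept by intermediaries.
        per_copy = production_cost / expected_buyers
        return per_copy / publisher_share

    print(minimum_price(50_000, 1_000))      # 100.0 DM - the academic text
    print(minimum_price(10_000, 1_000_000))  # 0.02 DM - mass-market floor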
This is the theory. But the facts are tellingly different. The lower the 
cost of production (brought down by digital technologies) - the fiercer 
the battle against piracy. The bigger the market - the more pressure is 
applied to clamp down on samizdat entrepreneurs. 
Governments, from China to Macedonia, are introducing intellectual 
property laws (under pressure from rich-world countries) and are belatedly 
enforcing them. But where one factory is closed onshore (as has been the 
case in mainland China) - two sprout offshore (as is the case in Hong Kong 
and in Bulgaria). 
But this defies logic: the market today is global, the costs of production 
are lower (with the exception of the music and film industries), the 
marketing channels more numerous (half of the income of movie studios 
emanates from video cassette sales), the speedy recouping of the investment 
virtually guaranteed. Moreover, piracy thrives in very poor markets in 
which the population would anyhow not have paid the legal price. The 
illegal product is inferior to the legal copy (it comes with no literature, 
warranties or support). So why should the big manufacturers, publishing 
houses, record companies, software companies and fashion houses worry? 
The answer lurks in history. Intellectual property is a relatively new 
notion. In the recent past, no one considered knowledge or the fruits of 
creativity (art, design) to be 'patentable', or someone's 'property'. The 
artist was but a mere channel through which divine grace flowed. Texts, 
discoveries, inventions, works of art and music, designs - all belonged to 
the community and could be replicated freely. True, the chosen ones, the 
conduits, were honoured - but they were rarely rewarded financially. They 
were commissioned to produce their works of art and were, in most cases, 
salaried. Only with the advent of the Industrial Revolution were the 
embryonic precursors of intellectual property introduced, and even these 
were still limited to industrial designs and processes, mainly as embedded 
in machinery. The patent was born. The more massive the market, the more 
sophisticated the sales and marketing techniques, the bigger the financial 
stakes - the larger loomed the issue of intellectual property. It spread 
from machinery to designs, processes, books, newspapers, any printed 
matter, works of art and music, films (which, at their inception, were not 
considered art), software, software embedded in hardware, business 
methods, and even unto genetic material. 
Intellectual property rights - despite their noble title - are less about 
the intellect and more about property. This is Big Money: the markets in 
intellectual property outweigh the total industrial production in the 
world. The aim is to secure a monopoly on a specific work. This is an 
especially grave matter in academic publishing, where small-circulation 
magazines do not allow their content to be quoted or published even for 
non-commercial purposes. The monopolists of knowledge and intellectual 
products cannot allow competition anywhere in the world - because theirs is 
a world market. A pirate in Skopje is in direct competition with Bill 
Gates. When he sells a pirated Microsoft product - he is depriving 
Microsoft not only of its income, but of a client (=future income), of its 
monopolistic status (cheap copies can be smuggled into other markets), and 
of its competition-deterring image (a major monopoly preserving asset). 
This is a threat which Microsoft cannot tolerate. Hence its efforts to 
eradicate piracy - successful in China and an utter failure in legally-
relaxed Russia. 
But what Microsoft fails to understand is that the problem lies with its 
pricing policy - not with the pirates. When faced with a global 
marketplace, a company can adopt one of two policies: either to adjust the 
price of its products to a world average of purchasing power - or to use 
discretionary differential pricing (as pharmaceutical companies were forced 
to do in Brazil and South Africa). A Macedonian with an average monthly 
income of 160 USD clearly cannot afford to buy the Encyclopaedia Encarta 
Deluxe. In America, 50 USD is the income generated in 4 hours of an average 
job. In Macedonian terms, therefore, the Encarta is 20 times more 
expensive. Either the price should be lowered in the Macedonian market - or 
an average world price should be fixed which will reflect an average global 
purchasing power. 
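A back-of-the-envelope version of this comparison, assuming a 160-hour 
working month (the exact multiple depends on the income figures one plugs 
in, but it lands in the same order of magnitude as the "20 times" above):

    us_monthly = 50 / 4 * 160   # 50 USD per 4 hours -> ~2,000 USD a month
    mk_monthly = 160            # average Macedonian monthly income, USD
    price = 50                  # the software package, USD

    us_burden = price / us_monthly   # ~2.5% of a US monthly income
    mk_burden = price / mk_monthly   # ~31% of a Macedonian monthly income
    print(mk_burden / us_burden)     # roughly a 12-fold difference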
Something must be done about this - and not only for economic reasons. 
Intellectual products are very price-sensitive: demand for them is highly 
elastic. Lower prices will be more than compensated for by much higher 
sales volumes. There is no other way to explain the pirate industries: 
evidently, at the right price, a lot of people are willing to buy these 
products. High prices are an implicit trade-off favouring a small, elite, 
select, rich-world clientele. This raises a moral issue: are the children 
of Macedonia less worthy of education and of access to the latest in human 
knowledge and creation? 
Two developments threaten the future of intellectual property rights. One 
is the Internet. Academics, fed up with the monopolistic practices of 
professional publications, already publish on the web in large numbers. I 
published a few books on the Internet and they can be freely downloaded by 
anyone who has a computer and a modem. The full text of electronic 
magazines, trade journals, billboards, professional publications, and 
thousands of books is available online. Hackers have even made sites 
available from which entire software and multimedia products can be 
downloaded. It is very easy and cheap to publish on the Internet; the 
barriers to entry are virtually nil. Web pages are hosted free of charge, 
and authoring and publishing tools are incorporated in most word 
processors and browser applications. As the Internet acquires more 
impressive sound and video capabilities, it will proceed to threaten the 
monopoly of the record companies, the movie studios and so on. 
The second development is also technological. The oft-vindicated Moore's 
law predicts the doubling of transistor density - and, with it, of memory 
and processing capacity - roughly every 18 months. But raw capacity is 
only one aspect of computing power. Another is the rapid simultaneous 
advance on all technological fronts. Miniaturization and concurrent 
empowerment by software tools have made it possible for individuals to 
emulate much larger-scale organizations successfully. A single person, 
sitting at home with 5,000 USD worth of equipment, can fully compete with 
the best products of the best printing houses anywhere. CD-ROMs can be 
written, stamped and copied in-house. A complete music studio with the 
latest in digital technology has been condensed to the dimensions of a 
single chip. This will lead to personal publishing, personal music 
recording, and to the digitization of plastic art. But this is only one 
side of the story. 
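The force of such doubling is easy to understate. A one-line 
compound-growth formula shows what an 18-month doubling period implies 
over a few years:

    def capacity(initial, months, doubling_period=18):
        # Compound doubling: growth by a factor of 2 ** (elapsed / period).
        return initial * 2 ** (months / doubling_period)

    print(capacity(1, 36))   # 4x in three years
    print(capacity(1, 60))   # ~10x in five years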
The relative advantage of the intellectual property corporation does not 
consist exclusively in its technological prowess. Rather it lies in its 
vast pool of capital, its marketing clout, market positioning, sales 
organization, and distribution network. 
Nowadays, anyone can print a visually impressive book, using the above-
mentioned cheap equipment. But in an age of information glut, it is the 
marketing, the media campaign, the distribution, and the sales that 
determine the economic outcome. 
This advantage, however, is also being eroded. 
First, there is a psychological shift, a reaction to the commercialization 
of intellect and spirit. Creative people are repelled by what they regard 
as an oligarchic establishment of institutionalized, lowest common 
denominator art and they are fighting back. 
Second, the Internet is a huge (200 million people), truly cosmopolitan 
market, with its own marketing channels freely available to all. Even by 
default, with a minimum investment, the likelihood of being seen by 
surprisingly large numbers of consumers is high.
I published one book the traditional way - and another on the Internet. In 
50 months, I have received 6500 written responses regarding my electronic 
book. Well over 500,000 people read it (my Link Exchange meter registered 
c. 2,000,000 impressions since November 1998). It is a textbook (in 
psychopathology) - and 500,000 readers is a lot for this kind of 
publication. I am so satisfied that I am not sure that I will ever consider 
a traditional publisher again. Indeed, my last book was published in the 
very same way. 
The demise of intellectual property has lately become abundantly clear. The 
old intellectual property industries are fighting tooth and nail to 
preserve their monopolies (patents, trademarks, copyright) and their cost 
advantages in manufacturing and marketing. 
But they are faced with three inexorable processes which are likely to 
render their efforts vain:
The Newspaper Packaging 
Print newspapers offer package deals of cheap content subsidized by 
advertising. In other words, the advertisers pay for content formation and 
generation, and the reader has no choice but to be exposed to commercial 
messages as he or she studies the content. 
This model - adopted earlier by radio and television - rules the internet 
now and will rule the wireless internet in the future. Content will be made 
available free of all pecuniary charges. The consumer will pay by providing 
his personal data (demographic data, consumption patterns and preferences 
and so on) and by being exposed to advertising. Subscription-based models 
are bound to fail. 
Thus, content creators will benefit only by sharing in the advertising 
cake. They will find it increasingly difficult to implement the old models 
of royalties paid for access or of ownership of intellectual property.
Disintermediation 
A lot of ink has been spilt regarding this important trend. The removal of 
layers of brokering and intermediation - mainly on the manufacturing and 
marketing levels - is a historic development (though the continuation of a 
long term trend). 
Consider music for instance. Streaming audio on the internet or 
downloadable MP3 files will render the CD obsolete. The internet also 
provides a venue for the marketing of niche products and reduces the 
barriers to entry previously imposed by the need to engage in costly 
marketing ("branding") campaigns and manufacturing activities. 
This trend is also likely to restore the balance between the artist and 
the commercial exploiters of his product. The very definition of "artist" 
will 
expand to include all creative people. One will seek to distinguish 
oneself, to "brand" oneself and to auction off one's services, ideas, 
products, designs, experience, etc. This is a return to pre-industrial 
times when artisans ruled the economic scene. Work stability will vanish 
and work mobility will increase in a landscape of shifting allegiances, 
head hunting, remote collaboration and similar labour market trends.
Market Fragmentation 
In a fragmented market with a myriad of mutually exclusive market niches, 
consumer preferences and marketing and sales channels - economies of scale 
in manufacturing and distribution are meaningless. Narrowcasting replaces 
broadcasting, mass customization replaces mass production, a network of 
shifting affiliations replaces the rigid owned-branch system. The 
decentralized, intrapreneurship-based corporation is a late response to 
these trends. The mega-corporation of the future is more likely to act as 
a collective of start-ups than as the homogeneous, uniform (and, to 
conspiracy theorists, sinister) juggernaut it once was. 

The Territorial Web
By: Sam Vaknin
 
The Net was supposed to dissolve anachronistic national borders and 
cultural boundaries. It was expected to vitiate distance - both physical 
and mental. It was hailed as the invention that would unify Mankind and 
harmonize (though not homogenize) civilizations, east and west.
Yet, this was not to be. As dot.coms bombed, their more veteran and more 
experienced brick and mortar rivals took over the Net, transforming it in 
the process into a giant content delivery, marketing, supply chain 
management, and customer relationship management platform. This evolution 
all but demolished the non-local nature of the early Internet. It has also 
brought it into the remit of existing national laws.
Moreover, governments throughout the world have become more assertive in 
exercising territorial jurisdiction over the hitherto ostensibly 
extraterritorial Net. A French court has prohibited Yahoo! from making 
certain content on its Web sites available to French citizens. An American 
court advised Yahoo! to ignore this decision. A Russian programmer was 
arrested by the FBI for offering decryption software for sale in Russia 
(where it is perfectly legal). Governments from China to Saudi Arabia 
filter Web content regularly. Following the September 11 attacks, 
restrictive anti-terrorist legislation the world over targeted cyberspace.
But the real territorialization of the Internet - the redrawing of its 
internal contours and the erosion of its libertarian foundations - is 
more pernicious, all-pervasive, quotidian, and surreptitiously gradual. 
This is not the outcome of legal revolutions and court-driven evolution. It 
is piecemeal, quiet, unnoticed, often inadvertent and unintended. It is an 
"afterthought" rather than a premeditated "plot". It happens e-tailer by e-
tailer, one Web site after the other, like the spread of a virus.
Consider these two - by no means exhaustive - examples. 
Amazon and Geocities (now, Yahoo!Geocities) are two Internet 
establishments, two gigantic communities of users that, between them, 
represent a sizable chunk of all the activity on the Internet. 
It has long been impossible for a non-US publisher to sell its wares 
(books, for instance) through Amazon or to Amazon directly. Amazon works 
exclusively with US publishers and distributors. To collaborate with Amazon 
- one of the members of a duopoly as far as B2C e-commerce goes - a non-US 
publisher (no matter how substantial) has to work with a US distributor and 
thus forgo a large portion of its revenues (payable to the distributor as 
commissions). Moreover, said publisher cannot even open a ZShop (Amazon's 
version of a mom-and-pop store). One has to be a US resident to do so. Amazon 
is closed to the outside world, despite its (false) global image. It sells 
all over the world - but it only buys American.
This discriminatory behaviour is partly profit-motivated. It is 
logistically easier and cheaper to deal only with US businesses. But Barnes 
and Noble works directly with foreign publishers and they preceded Amazon 
in the book business by decades. 
Yahoo!Geocities has lately instituted a new policy. It limits the volume 
of downloads from the free home pages of members of its community. If the 
content downloaded from a given home page exceeds 3 GB (extrapolated from 
hourly usage) - the "offending" member's page is shut down for an hour. 
The member is then prompted to pay a monthly subscription fee for a 
Premium Service in order to avoid a recurrence of this unfortunate event. 
This "marketing drive" is intended to compensate Yahoo!Geocities for a 
precipitous drop in online advertising revenues.
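Mechanically, such a cap is an extrapolation: a short sample of traffic 
is projected onto a longer window. The Python sketch below assumes a naive 
hour-to-month projection against a 3 GB ceiling; the exact window 
Yahoo!Geocities used is not documented here.

    GB = 1024 ** 3

    def over_quota(bytes_this_hour: int, cap_gb: int = 3) -> bool:
        # Naive projection: one hour's traffic, scaled to a 30-day month.
        projected = bytes_this_hour * 24 * 30
        return projected > cap_gb * GB

    print(over_quota(5 * 1024 ** 2))   # 5 MB/hour -> ~3.5 GB/month -> True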
The "Premium" package includes "Premium Mail". But only US citizens or 
residents can subscribe to it. And, you guessed it right, without the 
Premium Mail component, one cannot complete the subscription process. 
Though not stated explicitly anywhere, the Premium services are closed to 
the outside world and are the exclusive reserve of Americans. One can get 
around this virtual ethnic cleansing by providing false data while 
registering, but this is besides the point.
The Internet is a reflection of the outside world. As economies contract, 
unemployment soars, personal safety vanishes, the social fabric 
disintegrates, and consumption slumps - countries tend to isolate 
themselves politically, react aggressively, and protect their national 
economies. Protectionism, unilateralism, and isolationism are scourges the 
Internet was supposed to be immune to. Little did we know.
The In-Credible Web
By: Sam Vaknin 
http://www.webcredibility.org/ 
 
People are conditioned to trust written words, not to mention images. "I 
read it in the paper" or "As seen on TV" are worn out but still effective 
clichs. The Internet combines both the written and the seen. It is both a 
textual and a visual (and audio) medium. Do people trust Internet content? 
Is the incredible Internet - credible?
In the "brick and mortar" world, credibility is associated with brands. A 
brand, in effect, guarantees the quality and specifications of a product 
(think McDonald's hamburgers), its performance (think Palm), level of 
service and commitment to customer care (Amazon), variety, or price (Wal-
Mart). Brands are sustained and enhanced by advertising campaigns. The 
content or sales pitch of specific ads are often less important than the 
message conveyed by the very existence of a campaign: "This company is rich 
enough (read: stable, reliable, trustworthy, here to stay) to spend 
millions on advertising".  
The Internet has very few brands (Yahoo!, Amazon) - and some of them are 
tarnished. Some "old media" brands have entered the fray (Barnes and Noble, 
The Wall Street Journal, the Britannica) - hitherto without much success. 
The overwhelming bulk of Web content is created or disseminated by 
small-time entrepreneurs and monomaniacs. 
So, how does one establish or acquire credibility in such a diffuse and 
anarchic medium?
Enter Stanford University's "Web Credibility Project".
They define themselves thus:
"Our goal is to understand what leads people to believe what they find on 
the Web. We hope this knowledge will enhance Web site design and promote 
future research on Web credibility. As part of this ongoing project we are:
- Performing quantitative research on Web credibility. 
- Collecting all public information on Web credibility. 
- Acting as a clearinghouse for this information. 
- Facilitating research and discussion about Web credibility. 
- Helping designers create credible Web sites." 
Examples of current projects:
 
Timeliness: How does having out-of-date content affect the credibility 
of a Web site?
 
Interaction: How does having a personalized interaction with a Web site 
affect its credibility?
 
Negative Content: How does displaying negative content associated with a 
branded web site affect the credibility of the brand?
It is useful to confine ourselves to this definition of trust:
"The subjective belief, perception, or conviction that information provided 
is true, factual, and objective, and that commitments undertaken, 
explicitly, or implicitly, will be honoured fully and in a timely manner".
Such a perception, belief, or conviction is based on:
- Past experience in general (with spam, with merchants or providers, with 
a similar product category, with the same type of content, etc.) and a 
personal proclivity to trust or to distrust 
- Experience with the specific merchant or provider (whether personal or 
gleaned from other people's feedback - reviews, complaints, and opinions) 
There is little that a merchant can do about the former. The latter is, 
expectedly, influenced by: 
- Professionalism (as evident in Web site design, e-commerce facilities, 
user-friendliness, navigability, links to other relevant Web pages, links 
from other Web sites, ease and speed of download, updated content, 
proofreading, a domain name which matches the company's name, 
availability, multilingualism, etc.) 
- Trustworthiness (lack of bias, good intentions, truthfulness, 
thoroughness, objectivity, expertise and author credentials, knowledgeable 
sources and treatment, citations and bibliography), and what the authors 
of the research call "Real World Feel" (physical address, phone/fax 
numbers, non-Web e-mail address, photos of facilities and staff, audio 
recordings, ownership by a not-for-profit organization, a URL ending with 
ORG) 
- Commercial Web sites are less trusted. Cluttered ads, paid 
subscriptions, e-commerce enabled forms - all reduce a site's credibility! 
This is especially true if the entire site is one big ad and when it is 
hard to distinguish ads from content. 
- Track record (how long the merchant has been in business, past financial 
performance, credit history, brand name recognition, lists of customers, 
etc.) 
- Selection (how many products are carried, how often inventory is 
refreshed, etc.) 
- Advertising (is the company's business sufficiently lucrative to support 
a campaign?) 
- Service (good service indicates a reassuring readiness to sacrifice the 
bottom line to cater to the customer's legitimate concerns - feedback 
forms, live support, etc.) 
- Full disclosure of rates, prices, privacy policy, security issues, etc. 
- Feedback from other users (opinions, reviews, comments, FAQs, support 
groups, etc.) 
- Site rating and certification by trustworthy agencies (like the Better 
Business Bureau - BBB, VeriSign, TRUSTe) - or awards won (from credible 
and reputable organizations), and links from other well-known and 
believable Web sites. 
The Web Credibility Project discovered that trust in e-commerce is also 
influenced by idiosyncratic factors. Certain domain names (org) are more 
trusted than others (com). Too many ads, broken links, typos, and outdated 
or old content all diminish trust. In the absence of proven markers and 
behavioral guidelines, people seem to resort to extrapolation ("if they 
can't maintain their own Web site ...") and to stereotypes (e.g., NGOs are 
more trustworthy than corporations).
As Web sites proliferate (Google indexes well over 3 billion now) and Web 
authoring becomes a routine task - the ratio of garbage to useful 
information is bound to deteriorate. Search engines already incorporate 
crude measures of credibility in their rankings (e.g., the number of links 
from external Web sites). But, to remain useful, search engines (and Web 
directories) would do well to rate Web content more comprehensively and 
thoroughly. They should rank Web sites by authoritativeness, reliability, 
and objectivity, for instance. 
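A crude, link-based credibility score of the kind search engines already 
use can be sketched in a few lines of Python. This is a simplified, 
PageRank-style iteration over an invented link graph - not any engine's 
actual algorithm.

    links = {                  # hypothetical graph: site -> sites it links to
        "ngo.org": ["paper.com", "univ.edu"],
        "paper.com": ["ngo.org"],
        "univ.edu": ["ngo.org", "paper.com"],
    }

    scores = {site: 1.0 for site in links}
    for _ in range(20):        # iterate towards a fixed point
        scores = {
            site: 0.15 + 0.85 * sum(
                scores[source] / len(targets)
                for source, targets in links.items() if site in targets
            )
            for site in links
        }
    print(scores)              # heavily linked-to sites score highest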
Research shows that 75% of all respondents resort to the Internet as a 
primary information provider. The inundation of irrelevant material has 
caused most surfers to confine their surfing to some 10 Web sites (the 
equivalent of "anchors" in shopping malls), which they deem reliable, timely, accurate, 
objective, authoritative, and credible. The rest of the Internet gets the 
leftovers.  This worrying trend can be reversed only through the emergence 
of independent and commercially-viable rating agencies. Web sites (at least 
the business ones) should be willing to pay for credible rating to enhance 
their stickiness and attract monetizable "eyeballs". In the absence of such 
third party accreditation, the Internet risks both irrelevance and 
disrepute.




WEB TECHNOLOGIES AND TRENDS
Bright Planet, Deep Web 
By: Sam Vaknin
 
www.allwatchers.com and www.allreaders.com are web sites in the sense that 
a file is downloaded to the user's browser when he or she surfs to these 
addresses. But that is where the similarity ends. These web pages are 
front-ends, gates to underlying databases. The databases contain records 
regarding the plots, themes, characters and other features of, 
respectively, movies and books. Every user query generates a unique web 
page whose contents are determined by the query parameters. The number of 
singular pages thus capable of being generated is mind-boggling. Search 
engines operate on the same principle - vary the search parameters 
slightly and totally new pages are generated. It is a dynamic, 
user-responsive and chimerical sort of web.
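A minimal Python sketch of such a database-backed, query-generated page 
(the data and field names are invented):

    BOOKS = [                   # stand-in for the underlying database
        {"title": "Neuromancer", "theme": "cyberspace"},
        {"title": "Foundation", "theme": "empire"},
    ]

    def render(theme: str) -> str:
        # The page does not exist until the query arrives.
        hits = [book["title"] for book in BOOKS if book["theme"] == theme]
        return "<html><body>" + ", ".join(hits) + "</body></html>"

    print(render("cyberspace"))  # a page no crawler has ever fetched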
 
These are good examples of what www.brightplanet.com calls the "Deep Web" 
(previously, and inaccurately, described as the "Unknown or Invisible 
Internet"). They believe that the Deep Web is 500 times the size of the 
"Surface Internet" (a portion of which is spidered by traditional search 
engines). This translates to c. 7,500 terabytes of data (versus 19 
terabytes in the whole known web, excluding the databases of the search 
engines themselves) - or 550 billion documents organized in 100,000 deep 
web sites. By comparison, Google, the most comprehensive search engine 
ever, stores 1.4 billion documents in its immense caches at 
www.google.com. The natural inclination to dismiss these pages of data as 
mere re-arrangements of the same information is wrong. Actually, this 
underground ocean of covert intelligence is often more valuable than the 
information freely available or easily accessible on the surface. Hence 
the ability of c. 5% of these databases to charge their users subscription 
and membership fees. The average deep web site receives 50% more traffic 
than a typical surface site and is far more heavily linked to by other 
sites. Yet it is transparent to classic search engines and little known to 
the surfing public.
 
It was only a question of time before someone came up with a search 
technology to tap these depths (www.completeplanet.com).
 
LexiBot, in the words of its inventors, is...
 
"...the first and only search technology capable of identifying, 
retrieving, qualifying, classifying and organizing "deep" and "surface" 
content from the World Wide Web.  The LexiBot allows searchers to dive deep 
and explore hidden data from multiple sources simultaneously using directed 
queries. Businesses, researchers and consumers now have access to the most 
valuable and hard-to-find information on the Web and can retrieve it with 
pinpoint accuracy."
 
It places dozens of queries in dozens of threads simultaneously and 
spiders the results (rather as a "first generation" search engine would 
do). This could prove very useful with massive databases such as the human 
genome, weather patterns, simulations of nuclear explosions, thematic, 
multi-featured databases, intelligent agents (e.g., shopping bots) and 
third-generation search engines. It could also have implications for the 
wireless internet (for instance, in analysing and generating 
location-specific advertising) and for e-commerce (which amounts to the 
dynamic serving of web documents).
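The mechanics described - many directed queries in flight at once, with 
the results gathered for spidering - can be sketched as follows. The 
search URL and the queries are placeholders, not LexiBot's actual 
interface.

    from concurrent.futures import ThreadPoolExecutor
    from urllib.parse import quote
    from urllib.request import urlopen

    def fetch(query: str) -> bytes:
        # Placeholder endpoint; a real crawler would target many sources.
        url = "http://example.com/search?q=" + quote(query)
        with urlopen(url, timeout=10) as response:
            return response.read()

    queries = ["human genome", "weather patterns", "shopping bots"]
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        pages = list(pool.map(fetch, queries))  # dozens of threads in practice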
 
This transition from the static to the dynamic, from the given to the 
generated, from the one-dimensionally linked to the multi-dimensionally 
hyperlinked, from deterministic content to contingent, 
heuristically-created and uncertain content - is the real revolution and 
the future of the web. Search engines have lost their efficacy as 
gateways. Portals have taken over, but most people now use internal links 
(within the same web site) to get from one place to another. This is where 
the deep web comes in. Databases are about internal links. Hitherto they 
existed in splendid isolation, universes closed to all but the most 
persistent and knowledgeable. This may be about to change. The flood of 
relevant, quality information this change will unleash will dwarf anything 
that preceded it.

 
The Seamless Internet
By: Sam Vaknin

http://www.enfish.com/
 
The hype over ubiquitous (or pervasive) computing (computers everywhere) 
has masked a potentially more momentous development. It is the convergence 
of computing devices' interfaces with web (or other) content. Years ago - 
after Bill Gates overcame his misplaced scepticism - Microsoft introduced 
its "internet-ready" applications. Its word processing software ("Word"), 
other Office applications, and the Windows operating system handle both 
"local" documents (resident on the user's computer) and web pages smoothly 
and seamlessly. The transition between the desktop or laptop interface and 
the web is today effortlessly transparent.
 
The introduction of e-book readers and MP3 players has blurred the 
anachronistic distinction between hardware and software. Common speech 
reflects this fact. When we say "e-book", we mean both the device and the 
content we access on it. As technologies such as digital ink and printable 
integrated circuits mature - hardware and software will have completed 
their inevitable merger.
 
This erasure of boundaries has led to the emergence of knowledge 
management solutions and personal and shared workspaces. The LOCATION of a 
document (one's own computer, a colleague's PDA, or a web page) has become 
irrelevant. The NATURE of the document (e-mail message, text file, video 
snippet, soundbite) is equally unimportant. The SOURCE of the document 
(its extension, which tells us with which software it was created and can 
be read) is increasingly meaningless. Universal languages (such as Java) 
allow devices and applications to talk to each other. What matters is 
accessibility and a logical, user-friendly workflow.
 
Enter Enfish. In its own words, it provides:
 
"...Personalized portal solution linking personal and corporate knowledge 
with relevant information from the Internet, ...live-in desktop environment 
providing co-branding and customization opportunities on and offline, a 
unique, private communication channel to users that can be used also for 
eBusiness solutions, ...Knowledge Management solution that requires no user 
set-up or configuration."
 
The principle is simple enough - but the experience is liberating (try 
their online flash demo). Suddenly, instead of juggling dozens of windows, 
a single interface provides the tortured user (that's I) with access to all 
his applications: e-mail, contacts, documents, the company's intranet or 
network, the web and OPC's (other people's computers, other networks, other 
intranets). There is only a single screen and it is dynamically and 
automatically updated to respond to the changing information needs of the 
user.
 
"The power underlying Enfish Onespace is its patented DEX 'engine.' This 
technology creates a master, cross-referenced index of the contents of a 
user's email, documents and Internet information. The Enfish engine then 
uses this master index as a basis to understand what is relevant to a user, 
and to provide them with appropriate information. In this manner Enfish 
Onespace 'personalizes' the Internet for each user, automatically 
connecting relevant information and services from the Internet with the 
user's desktop information.
 
As an example, by clicking on a person or company, Enfish Onespace 
automatically assembles a page that brings together related emails, 
documents, contact information, appointments, news and relevant news 
headlines from the Internet. This is accomplished without the user working 
to find and organize this information. By having everything in one place 
and in context, our users are more informed and better prepared to perform 
tasks such as handling a phone call or preparing for a business meeting. 
This results in ... benefits in productivity and efficiency."
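Stripped to its essentials, such a master index is a cross-reference from 
entities (people, companies) to every item that mentions them. A toy 
Python version, with invented items - not Enfish's patented engine:

    from collections import defaultdict

    index = defaultdict(list)   # entity -> items that mention it

    def add(item_id, text, entities):
        for entity in entities:
            if entity.lower() in text.lower():
                index[entity].append(item_id)

    add("mail-17", "Lunch with Acme Corp on Friday", ["Acme Corp"])
    add("doc-3", "Acme Corp draft contract", ["Acme Corp"])
    print(index["Acme Corp"])   # ['mail-17', 'doc-3'] - one click, one view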
 
It is, indeed, addictive. The inevitable advent of transparent computing 
(smart houses, smart cards, smart clothes, smart appliances, wireless 
Internet) - coupled with the single GUI (Graphic User Interface) approach 
- could spell a revolution in our habits. Information will be available to 
us anywhere, through an identical screen, communicated instantly and 
accurately from device to device, from one appliance to another and from 
one location to the next as we move. The underlying software and hardware 
will become as arcane and mysterious as ASCII and assembly language are to 
the average computer user today. It will be a real partnership of 
biological and artificial intelligence on the move.
 
  
The Polyglottal Internet
By: Sam Vaknin
 
http://www.everymail.com/ 
The Internet started off as a purely American phenomenon and seemed to 
perpetuate the fast-emerging dominance of the English language. A 
negligible minority of web sites were in other languages. Software 
applications were chauvinistically ill-prepared (and still are) to deal 
with anything but English. And the vast majority of net users were 
residents of the two North-American colossi, chiefly the USA. 
All this started to change rapidly about two years ago. Early this year, 
the number of American users of the Net was surpassed by the swelling tide 
of European and Japanese ones. Non-English web sites are proliferating as 
well. The advent of the wireless Internet - more widespread outside the 
USA - is likely to strengthen this unmistakable trend. By 2005, certain 
analysts expect non-English speakers to make up as much as 70% of all 
netizens. This fragmentation of a hitherto unprecedentedly homogeneous 
market presents both opportunities and costs. It is much more expensive to 
market in ten languages than in one. Everything - from e-mail to supply 
chains - has to be re-tooled or customized. 
It is easy to translate text in cyberspace. Various automated, web-based, 
and free applications (such as Babylon or Travlang) cater to the needs of 
the casual user who doesn't mind the quality of the end-result. Virtually 
every search engine, portal and directory offers access to these or similar 
services. 
But straightforward translation is only one kind of solution to the tower 
of Babel that the Internet is bound to become. 
Enter WordWalla. A while back I used their multi-lingual e-mail 
application. It converted text I typed on a virtual keyboard to images (of 
characters). My addressees received the message in any language I 
selected. It was more than cool. It was liberating. In the same vein, 
WordWalla's software allows application and content developers to work in 
66 languages. In their own words: 
"WordWalla allows device manufacturers and application developers to meet 
this challenge by developing products that support any language. This 
simplifies testing and configuration management, accelerates time to 
market, lowers unit costs and allows companies to quickly and easily enter 
new markets and offer greater levels of personalization and customer 
satisfaction." 
GlobalVu converts text to device-independent images. GlobalEase Web is a 
"Java-based multilingual text input and display engine". It includes 
virtual keyboards, front-end processors, and a contextual processor and 
text layout engine for left to right and right to left language formatting. 
They have versions tailored to the specifications of mobile devices. 
The secret is in generating and processing images (bitmaps), compressing 
them and transmitting them. In a way, WordWalla generates a FACSIMILE 
message (the kind we receive on our fax machines) every time text is 
exchanged. It is transparent to both sender and receiver - and it makes a 
user-driven polyglottal Internet a reality. 
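The facsimile principle can be sketched with the Pillow imaging library 
(the library choice and the image dimensions are assumptions): render the 
message to a bitmap and ship pixels, so the recipient needs no fonts for 
the language.

    from PIL import Image, ImageDraw

    def text_to_bitmap(message: str) -> Image.Image:
        # Render text as black-on-white pixels; no fonts travel with it.
        image = Image.new("1", (10 * len(message) + 20, 24), color=1)
        ImageDraw.Draw(image).text((10, 5), message, fill=0)
        return image

    text_to_bitmap("Zdravo, svetu!").save("message.png")  # send the image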



Deja Googled
By: Sam Vaknin
http://groups.google.com/ 
http://groups.google.com/googlegroups/archive_announce.html 
The Internet may have started as the fervent brainchild of DARPA, the US 
defence agency - but it quickly evolved into a network of computers at the 
service of a community. Academics around the world used it to communicate, 
compare results, compute, interact and flame each other. The ethos of the 
community as content-creator, source of information, fount of emotional 
sustenance, peer group, and social substitute is well embedded in the very 
fabric of the Net. Millions of members of free, advertising- or 
subscription-financed mega-sites such as Geocities, AOL, Yahoo and Tripod 
generate more bits and bytes than the rest of the Internet combined. This 
traffic emanates from discussion groups, announcement (mailing) lists, 
newsgroups, and content sites (such as Suite101 and Webseed). Even the 
occasional visitor can find priceless gems of knowledge and opinion in the 
mound of trash and frivolity that these parts of the web have become. 
The emergence of search engines and directories which cater only to this 
(sizeable) market segment was to be expected. By far the most 
comprehensive (and, thus, least discriminating) was Deja. It spidered and 
took in the exploding newsgroups (Usenet) scene with its tens of thousands 
of daily messages. When it was taken over by Google, its archives 
contained more than 500 million messages, cross-indexed every which way 
and pertaining to every possible (and many an impossible) topic. 
Google is by far the most popular search engine yet, having surpassed the 
more veteran Northern Light, Fast, and Alta Vista. Its mind-defying 
database (more than 1.3 billion web pages), its caching technology (making 
it, in effect, one of the biggest libraries on earth) and its site ranking 
(by popularity and inbound links) have rendered it unbeatable. Yet, its 
efforts to integrate the treasure trove that is Deja and adapt it to the 
Google search interface were hitherto spectacularly unsuccessful (though 
it finally made it, two and a half months after the purchase). So much so, 
that it gave birth to a protest movement. 
Bickering and bad-tempered flaming (often bordering on the deranged, the 
racist, or the stalker-like) are the more repulsive aspects of the Usenet 
groups. But at the heart of the debate this time is no ordinary sadistic 
venting. The issue is: who owns content generated by the public at large on 
computers funded by tax dollars? Can a commercial enterprise own and 
monopolize the fruits of the collective effort of millions of individuals 
from all over the world? Or should such intellectual property remain in the 
public domain, perhaps maintained by public institutions (such as the 
Library of Congress)? Should open source movements gain access to Deja's 
source code in order to launch Deja II? And who owns the copyright to all 
these messages (theoretically, the authors)? Google, as Deja before it, is 
offering compilations of this content, the copyright to which it does not 
and cannot own. The very legal concept of intellectual property is at the 
crux of this virtual conflict. 
Google was, thus, compelled to offer free access to the CONTENT of the Deja 
archives to alternative (non-Google) archiving systems. But it remains mum 
on the search programming code and the user interface. Already one such 
open source group (called Dela News) is coalescing, although it is not 
clear who will bear the costs of the gigantic storage and processing such a 
project would require. Dela wants to have a physical copy of the archive 
deposited in trust with a dot org. 
This raises a host of no less fascinating subjects. The Deja Usenet search 
technology, programming code, and systems are inextricable and almost 
indistinguishable from the Usenet archive itself. Without these elements - 
structural as well as dynamic - there will be no archive and no way to 
extract meaningful information from the chaotic bedlam that is the Usenet 
environment. In this case, the information lies in the ordering and 
classification of raw data and not in the content itself. This is why the 
open source proponents demand that Google share both content and the tools 
to access it. Google's hasty and improvised unplugging of Deja in February 
only served to aggravate the die-hard fans of the erstwhile Deja. 
The Usenet is not only the refuge of pedophiles and neo-Nazis. It includes 
thousands of academically rigorous and research-inclined discussion groups 
which morph with intellectual trends and fashionable subjects. More than 
twenty years of wisdom and erudition are buried in servers all over the 
world. Scholars often visit Usenet in their pursuit of complementary 
knowledge or expert advice. The Usenet is also the documentation of 
Western intellectual history of the last three decades. It is invaluable. 
Google's decision to abandon the internal links between Deja messages 
means the disintegration of the hyperlinked fabric of this resource - 
unless Google comes up with an alternative (and expensive) solution. 
Google is offering a better, faster, more multi-layered and multi-faceted 
access to the entire archive. But its brush with the more abrasive side of 
the open source movement brought to the surface long suppressed issues. 
This may be the single most important contribution of this otherwise not so 
opportune transaction.  
  
Maps of Cyberspace
By: Sam Vaknin
 
"Cyberspace. A consensual hallucination experienced daily by billions of 
legitimate operators, in every nation, by children being taught 
mathematical concepts...A graphical representation of data abstracted from 
the banks of every computer in the human system. Unthinkable complexity. 
Lines of light ranged in the non-space of the mind, clusters and 
constellations of data. Like city lights, receding..." (William Gibson, 
"Neuromancer", 1984, page 51) 
http://www.ebookmap.net/maps.htm 
http://www.cybergeography.org/atlas/atlas.html 
At first sight, it appears to be a static, cluttered diagram with 
multicoloured, overlapping squares. Really, it is an extremely powerful 
way of presenting the dynamics of the emerging e-publishing industry. R2 
Consulting has constructed these eBook Industry Maps to "reflect the 
evolving business models among publishers, conversion houses, digital 
distribution companies, eBook vendors, online retailers, libraries, 
library vendors, authors, and many others. These maps are 3-dimensional, 
offering viewers both a high-level orientation to the eBook landscape and 
an in-depth look at multiple eBook models and the partnerships that have 
formed within each one." Pass your mouse over any of the squares and a 
virtual floodgate opens - a universe of interconnected and hyperlinked 
names, a detailed atlas of who does what to whom. 
eBookMap.net is one example of a relatively novel approach to databases 
and web indexing. The metaphor of cyberspace comes alive in spatial, two- 
and three-dimensional, map-like representations of the world of knowledge 
in Cybergeography's online "Atlas". Instead of endless, static and bi-
chromatic lists of links - Cybergeography catalogues visual, recombinant 
vistas with a stunning palette, internal dynamics and an intuitively 
conveyed sense of inter-relatedness. Hyperlinks are incorporated in the 
topography and topology of these almost-neural maps. 
"These maps of Cyberspaces - cybermaps - help us visualise and comprehend 
the new digital landscapes beyond our computer screen, in the wires of the 
global communications networks and vast online information resources. The 
cybermaps, like maps of the real-world, help us navigate the new 
information landscapes, as well being objects of aesthetic interest. They 
have been created by 'cyber-explorers' of many different disciplines, and 
from all corners of the world. Some of the maps ... in the Atlas of 
Cyberspaces ... appear familiar, using the cartographicconventions of real-
world maps, however, many of the maps are much more abstract 
representations of electronic spaces, using new metrics and grids." 
Navigating these maps is like navigating an inner, familiar territory. 
They come in all shapes and modes: flow charts, quasi-geographical maps, 
3-d simulator-like terrains and many others. The "Web Stalker" is an 
experimental web browser equipped with mapping functions. The range of 
applicability is mind-boggling. 
A (very) partial list: 
The Internet Genome Project - an "open-source map of the major conceptual 
components of the Internet and how they relate to each other" 
Anatomy of a Linux System - aims to "...give viewers a concise and 
comprehensive look at the Linux universe", with, at the heart of the 
poster, a gravity-well graphic showing the core software components, 
surrounded by explanatory text 
NewMedia 500 - the financial, strategic, and other inter-relationships and 
interactions between the leading 500 new (web) media firms 
Internet Industry Map - ownership and alliances determine status, control, 
and access in the Internet industry. A revealing organizational chart. 
The Internet Weather Report - measures Internet performance, latency 
periods and downtime based on a sample of 4000 domains (a minimal latency 
probe in this spirit is sketched after this list). 
Real Time Geographic Visualization of WWW Traffic - a stunning, 3-d 
representation of web usage and traffic statistics the world over. 
WebBrain and Map.net provide a graphic rendition of the Open Directory 
Project. The thematic structure of the ODP is instantly discernible. 
The WebMap is a visual, multi-category directory which contains 2,000,000 
web sites. The user can zoom in and out of sub-categories and "unlock" 
their contents. 
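A minimal latency probe in the spirit of such weather reports times a TCP 
connection to each host in a sample. The hosts below are placeholders; a 
real survey would probe thousands of domains.

    import socket
    import time

    def latency_ms(host: str, port: int = 80) -> float:
        # Time a TCP handshake as a rough proxy for network latency.
        start = time.time()
        with socket.create_connection((host, port), timeout=5):
            return (time.time() - start) * 1000

    for host in ["example.com", "example.org"]:
        print(host, round(latency_ms(host)), "ms")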
Maps help write fiction, trace a user's clickpath (replete with clickable 
web sites), capture Usenet and chat interactions (threads), plot search 
results (though Alta Vista discontinued its mapping service and Yahoo!3D is 
no more), bookmark web destinations, and navigate through complex sites. 
Different metaphors are used as interfaces. Web sites are represented as 
plots of land, stars (whose brightness corresponds to the web site's 
popularity ranking), amino acids in DNA-like constellations, topographical 
maps of the ocean depths, buildings in an urban landscape, or other 
objects in a pastoral setting. Virtual Reality (VR) maps allow information 
to be browsed simultaneously by teams of collaborators, sometimes 
represented as avatars in a fully immersive environment. In many 
applications, the user is expected to fly amongst the data items in 
virtual landscapes. With the advent of sophisticated GUIs (Graphic User 
Interfaces) and VRML (the Virtual Reality Modeling Language) - these maps 
may well show us the way to a more colourful and user-friendly future. 



THE INTERNET AND THE DIGITAL DIVIDE
 
The Internet - A Medium or a Message?
By: Sam Vaknin
The State of the Net  
An Interim Report about the Future of the Internet 
 
Who are the participants who constitute the Internet? 
- Users - connected to the net and interacting with it 
- The communications lines and the communications equipment 
- The intermediaries (e.g., the suppliers of on-line information or 
access providers) 
- Hardware manufacturers 
- Software authors and manufacturers (browsers, site development tools, 
specific applications, smart agents, search engines and others) 
- The "Hitchhikers" (search engines, smart agents, Artificial 
Intelligence - AI - tools and more) 
- Content producers and providers 
- Suppliers of financial wherewithal (currently corporate and 
institutional cash, gradually being replaced by advertising money) 
The fate of each of these components - separately and in solidarity - will 
determine the fate of the Internet. 
The first phase of the Internet's history was dominated by computer 
wizards. Thus, any attempt at predicting its future dealt mainly with its 
hardware and software components. 
Media experts, sociologists, psychologists, advertising and marketing 
executives were left out of the collective effort to determine the future 
face of the Internet. 
As far as content is concerned, the Internet cannot currently be defined 
as a medium. It does not function as one. Rather, it is a very disordered 
library, mostly incorporating the writings of non-distinguished 
megalomaniacs. It is the ultimate narcissistic experience. The forceful 
entry of publishing houses and content aggregators is changing this dismal 
landscape, though. 
Ever since the invention of television, nothing has begged to become a 
medium as much as the Internet. 
Three analogies spring to mind when contemplating the Internet in its 
current state: 
- A chaotic library 
- A neural network, or the latter-day equivalent of previous networks 
(telegraph, telephony, railways) 
- A new continent 
These metaphors prove to be very useful (even business-wise). They permit 
us to define the commercial opportunities embedded in the Internet. 
Yet, they fail to assist us in predicting its future as it transforms into 
a medium. 
How does an invention become a medium? What happens to it when it does 
become one? What is the thin line separating the initial functioning of the 
invention from its transformation into a new medium? In other words: when 
can we tell that some technological advance gave birth to a new medium? 
This work also deals with the image of the Internet once transformed into a 
medium. 
The Internet has the most unusual attributes in the history of media. 
It has no central structure or organization. It is hardware and software 
independent. It (almost) cannot be subjected to legislation or to 
regulation. Consider the example of downloading music from the internet - 
is it tantamount to an act of recording music (a violation of copyright 
laws)? This has been the crux of the legal battle between Diamond 
Multimedia (the manufacturers of the Rio MP3 device), MP3.com and Napster 
and the recording industry in America. 
The Internet's data transfer channels are not linear - they are random. 
Most of its "broadcast" cannot be "received" at all. It allows for the 
narrowest of narrowcasting through the use of e-mail mailing lists, 
discussion groups, message boards, private radio stations, and chats. And 
this is but a small portion of an impressive list of oddities. These 
idiosyncrasies will also shape the nature of the Internet as a medium. 
Growing out of bizarre roots - it is bound to yield strange fruit as a 
medium. 
So what business opportunities does the Internet represent? 
I believe that they are to be found in two broad categories: 
- Software and hardware related to the Internet's future as a medium 
- Content creation, management and licensing 
 
 
The Map of Terra Internetica 
 
The Users 
How many Internet users are there? How many of them have access to the Web 
(World Wide Web - WWW) and use it? There are no unequivocal statistics. 
Those who presume to give the answers (including the ISOC - the Internet 
SOCiety) rely on very partial and biased resources. Others just bluff. 
Yet, everyone seems to agree that there are, at least, 100 million active 
participants in North America (the Nielsen and CommerceNet reports). 
The future is, inevitably, even more vague than the present. Authoritative 
consultancy firms predict 66 million active users in 10 years' time. IBM 
envisages 700 million users. MCI is more modest with 300 million. At the 
end of 1999 there were 130 million registered (though not necessarily 
active) users. 
The Internet - an Elitist and Chauvinistic Medium 
The average user of the Internet is young (30), with an academic background 
and high income. The percentage of the educated and the well-to-do among 
the users of the Web is three times as high as their proportion in the 
population. This is changing fast, partly because their children are 
joining them (6 million already had access to the Internet at the end of 
1996 - and were joined by another 24 million by the end of the decade), and 
partly due to presidential initiatives to bridge the "digital divide" (from 
Al Gore's in the USA to Mahathir Mohamad's in Malaysia), corporate largesse 
and institutional involvement (e.g., the Open Society in Eastern Europe, 
Microsoft in the USA). These efforts will spread the benefits of this 
all-powerful tool among the less privileged. A bit less than 50% of all 
users are men, but they are responsible for 60% of the activity on the net 
(as measured by traffic). 
Women seem to limit themselves to electronic mail (e-mail) and to 
electronic shopping for goods and services, though this is changing fast. 
Men prefer information, whether due to career requirements or because 
knowledge is power. 
Most of the users are of the "experiencer" variety. They are innovative 
leaders of social change. This breed inhabits universities, fashionable 
neighbourhoods and trendy vocations. This is why some wonder if the 
Internet is not just another fad, albeit an incredibly resilient and 
promising one. 
Most users have home access to the Internet - yet, they still prefer to 
access it from work, at their employer's expense, though this preference is 
slight and being eroded. Most users are, therefore, exploitative in nature. 
Still, we must not forget that there are 37 million households of the self-
employed and this possibly distorts the statistical picture somewhat. 
The Internet - A Western Phenomenon 
Not African, not Asian (with the exception of Israel and Japan), not 
Russian, nor a Third World phenomenon. It belongs squarely to the wealthy, 
sated world. It is the indulgence of those who have everything and whose 
greatest concern is their choice of nightly entertainment. Between 50-60% 
of all Internet users live in the USA, 5-10% in Canada. The Internet is 
catching on in Europe (mainly in Germany and in Scandinavia) and, in its 
mobile form (i-mode), in Japan. In France, the Internet lost to the 
Minitel because the latter provides more locally relevant content and 
because of the high costs of Internet communications and hardware. 
Communications 
Most computer owners still possess a 28,800 bps modem. This is much like 
riding a bicycle on a German Autobahn. The 56,000 bps (56K) modem is 
gradually replacing its slower predecessor (it is found in 48% of computers 
with modems) - but even this is hardly sufficient. To begin to enjoy video 
and audio (especially the former), data transfer rates need to be 50 times 
faster. 
Half the households in the USA have at least 2 telephones and one of them 
is usually dedicated to data processing (faxes or fax-modems). 
The ISDN could constitute the mid-term solution. This data transfer network 
is fairly speedy and covers 70% of the territory of the USA. It is growing 
by 100% annually and its sales topped 10 billion USD in 1995/6. 
Unfortunately, it is quite clear that ISDN is not THE answer. It is too 
slow, too user-unfriendly, has a bad interface with other network types, it 
requires special hardware. There is no point in investing in temporary 
solutions when the right solution is staring the Internet in the face, 
though it is not implemented due to political circumstances. 
A cable modem is 80 times speedier than the ISDN and 700 times faster than 
a 14,400 bps modem. However, it does have problems in accommodating two-way 
data transfer. There is also a need to connect the fibre optic 
infrastructure which characterizes cable companies to the old copper 
coaxial infrastructure which characterizes telephony. Cable users require 
specially customized LANs (Ethernet) and the hardware is expensive (though 
equipment prices are forecast to collapse as demand increases). Cable 
companies simply did not invest in developing the technology. The law 
(prior to the 1996 Communications Act) forbade them to do anything other 
than one-way transfer of video via cables. Now, with the more liberal 
regulatory environment, it is a mere question of time until the technology 
matures. 
Actually, most consumers single out bad customer relations as their biggest 
problem with the cable companies - rather than technology. 
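As a back-of-the-envelope check on the speed multiples quoted above - and 
assuming that "ISDN" here means a dual-channel, 128,000 bps line, which the 
text does not spell out - both comparisons imply roughly the same cable 
speed: 

    # Implied cable-modem speed from the two multiples quoted above.
    # Assumption: ISDN = 128,000 bps (two bonded 64 kbps channels).
    MODEM_BPS = 14_400
    ISDN_BPS = 128_000

    via_modem = 700 * MODEM_BPS   # 10,080,000 bps
    via_isdn = 80 * ISDN_BPS      # 10,240,000 bps

    print(f"{via_modem / 1e6:.1f} Mbps vs. {via_isdn / 1e6:.1f} Mbps")

Both multiples work out to roughly 10 Mbps - about the theoretical 
downstream rate of early cable modems - so the comparison is at least 
internally consistent. 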
Experiments conducted with cable modems led to a doubling of usage time 
(from an average of 24 to 47 hours per month per user) which was wholly 
attributable to the increased speed. This comes close to a cultural 
revolution in the allocation of leisure time. Numerically speaking: 7 
million households in the USA are fitted with two-way data transfer cable 
modems. This is a small number and it is anyone's guess whether it 
constitutes a 
critical mass. Sales of such modems amount to 1.3 billion USD annually. 
50% of all cable subscribers also have a PC at home. To me it seems that 
the merging of the two technologies is inevitable. 
Other technological solutions - such as DSL, ADSL, and the more promising 
satellite broadband - are being developed and implemented, albeit slowly 
and inefficiently. Coverage is sporadic and frustrating waiting periods are 
measured in months. 
Hardware and Software 
Most Internet users (82%) work with the Windows operating system. About 11% 
own a Macintosh (much stronger graphically and more user-friendly). Only 7% 
continue to work on UNIX based systems (which, historically, fathered the 
Internet) - and this number is fast declining. A strong entrant is the 
open-source Linux operating system. 
Virtually all users surf through browsing software. A fast-dwindling 
minority (26%) use Netscape's products (mainly Navigator and Communicator) 
and the majority use Microsoft's Explorer (more than 60% of the market). 
Browsers are now free products and can be downloaded from the Internet. As 
late as 1997, major Internet consultancy firms predicted that browser sales 
would top $4 billion by the year 2000. Such misguided 
predictions ignored the basic ethos of the Internet: free products, free 
content, free access.
Browsers are in for a great transformation. Most of them are likely to have 
3-D, advanced audio, telephony / voice / video mail (v-mail), instant 
messaging, e-mail, and video conferencing capabilities integrated into the 
same browsing session. They will become self-customizing, intelligent, 
Internet interfaces. They will memorize the history of usage and user 
preferences and adapt themselves accordingly. They will allow content-
specificity: unidentifiable smart agents will scour the Internet, make 
recommendations, compare prices, order goods and services and customize 
contents in line with self-adjusting user profiles. 
Two important technological developments must be considered: 
PDAs (Personal Digital Assistants) - the ultimate personal (and office) 
communicators: easy to carry, they provide Internet access everywhere, 
independent of suppliers and providers and of physical infrastructure (in 
an aeroplane, in the field, in a cinema). 
The second trend: wireless data transfer and wireless e-mail, whether 
through pagers, cellular phones, or through more sophisticated apparatus 
and hybrids such as smart phones. Geotech's products are an excellent 
example: e-mail, faxes, telephone calls and a connection to the Internet 
and to other, public and corporate, or proprietary, databases - all 
provided by the same gadget. This is the embodiment of the electronic, 
physically detached, office. Wearable computing should be considered a part 
of this "ubiquitous or pervasive computing" wave. 
We have no way of gauging - or intelligently guessing - the share of the 
mobile Internet in the total future Internet market, but it is likely to 
outweigh the "fixed" part. Wireless Internet meshes well with the trend of 
pervasive computing and the intelligent home and office. Household gadgets 
such as microwave ovens, refrigerators and so on will connect to the 
internet via a wireless interface to cull data, download information, order 
goods and services, report their condition and perform basic maintenance 
functions. Location specific services (navigation, shopping 
recommendations, special discounts, deals and sales, emergency services) 
depend on the technological confluence between GPS (satellite-based 
geolocation technology) and wireless Internet. 
Suppliers and Intermediaries 
"Parasitic" intermediaries occupy each stage in the Internet's food chain. 
Access to the Internet is still provided by "dumb pipes" - the Internet 
Service Providers (ISPs). Content is still the preserve of content 
suppliers, and so on. 
Some of these intermediaries are doomed to gradually fade or to suffer a 
substantial diminishing of their share of the market. Even "walled gardens" 
of content (such as AOL) are at risk.
By way of comparison, even today, ISPs have four times as many subscribers 
(worldwide) as AOL. Admittedly, this adversely affects the quality of the 
Internet - the infrastructure maintained by the phone companies is slow and 
often succumbs to bottlenecks. The unequivocal intention of the telephony 
giants to become major players in the Internet market should also be taken 
into account. The phone companies will, thus, play a dual role: they will 
provide access to their infrastructure to their competitors (sometimes 
within an actual monopoly) - and they will compete with their 
clients. The same can be said about the cable companies. Controlling the 
last mile to the user's abode is the next big business of the Internet. 
Companies such as AOL are disadvantaged by these trends. It is imperative 
for AOL to obtain equal access to the cable company's backbone and 
infrastructure if it wants to survive. Hence its merger with Time Warner. 
No wonder that many of the ISPs judge this intrusion on their turf by the 
phone and cable companies to constitute unfair competition. Yet, one should 
not forget that the barriers to entry are very low in the ISP market. It 
takes a minimal investment to become an ISP: 200 modems (which cost 200 
USD each) are enough to satisfy the needs of 2000 average users, who 
generate an income of 500,000 USD per annum for the ISP. Routers are 
equally cheap nowadays. This is a handsome return on the ISP's capital, 
undoubtedly. 
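The arithmetic, worked through (all figures are the ones quoted above): 

    # Rough ISP economics, using the text's own numbers.
    modems = 200
    modem_cost_usd = 200
    users = 2_000
    annual_income_usd = 500_000

    capital = modems * modem_cost_usd            # 40,000 USD up front
    per_user = annual_income_usd / users         # 250 USD per user per year
    gross_return = annual_income_usd / capital   # income per capital dollar

    print(capital, per_user, gross_return)       # 40000 250.0 12.5

On these numbers, a year's gross income is more than twelve times the 
capital tied up in modems - which is why the barriers to entry stay low. 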
The Hitchhikers 
The Web houses the equivalent of 100 billion pages. Search Engine 
applications are used to locate specific information in this impressive, 
constantly proliferating library. They will be replaced, in the near 
future, by "Knowledge Structures" - gigantic encyclopaedias, whose text 
will contain references (hyperlinks) to other, relevant, sites. The far 
future will witness the emergence of the "Intelligent Archives" and the 
"Personal Newspapers" (read further for detailed explanations). Some 
software applications will summarize content, others will index and 
automatically reference and hyperlink texts (virtual bibliographies). An 
average user will have an on-going interest in 500 sites. Special software 
will be needed to manage address books ("bookmarks", "favourites") and 
contents ("Intelligent Addressbooks"). The phenomenon of search engines 
dedicated to search a number of search engines simultaneously will grow 
("Hyper- or meta- engines"). Meta-engines will work in the background and 
download hyperlinks and advertising (the latter is essential to secure the 
financial interest of site developers and owners). Statistical software 
which tracks ("how long was what done"), monitors ("what did they do while 
in the site") and counts ("how many") visitors to sites already exists. 
Some of these applications have back-office facilities (accounting, follow-
up, collections, even tele-marketing). They all provide time trails and 
some allow for auditing. 
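A toy sketch of what such tracking software does - counting visitors ("how 
many"), reconstructing their paths ("what did they do while in the site") 
and timing them ("how long") - from raw page-view events. The events below 
are invented for the illustration: 

    # Aggregate raw page-view events into counts and per-visitor trails.
    from collections import Counter, defaultdict
    from datetime import datetime

    events = [  # (visitor, page, timestamp) - hypothetical log entries
        ("alice", "/home", datetime(1999, 5, 1, 10, 0)),
        ("alice", "/products", datetime(1999, 5, 1, 10, 3)),
        ("bob", "/home", datetime(1999, 5, 1, 10, 1)),
    ]

    page_counts = Counter(page for _, page, _ in events)   # "how many"
    trails = defaultdict(list)                             # time trails
    for visitor, page, ts in events:
        trails[visitor].append((ts, page))

    for visitor, trail in sorted(trails.items()):
        trail.sort()
        duration = trail[-1][0] - trail[0][0]              # "how long"
        print(visitor, [p for _, p in trail], duration)
    print(page_counts)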
This is but a small fragment of the rapidly developing net-scape: people 
and enterprises who make a living off the Internet craze rather than off 
the Internet itself. Everyone knows that there is more money in lecturing 
about how to make money on the Internet - than in the Internet itself. This 
maxim still holds true despite the 32 billion US dollars in E-commerce in 
1998. Business to Consumer (B2C) sales grow less vigorously than Business 
to Business (B2B) sales and are likely to suffer another blow with the 
advent of Peer to Peer (P2P) computer networks. The latter allow PCs to act 
as servers and thus enable the swapping of computer files among connected 
users (with or without a central directory). 
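The central-directory variant of P2P (the Napster model) fits in a few 
lines; the peers and files below are hypothetical: 

    # A toy central directory: peers register shared files; a lookup
    # tells a downloader which peers to fetch from, peer-to-peer.
    directory: dict[str, list[str]] = {}   # filename -> offering peers

    def register(peer: str, files: list[str]) -> None:
        for f in files:
            directory.setdefault(f, []).append(peer)

    def lookup(filename: str) -> list[str]:
        return directory.get(filename, [])

    register("peer-a", ["song.mp3", "essay.txt"])
    register("peer-b", ["song.mp3"])
    print(lookup("song.mp3"))   # ['peer-a', 'peer-b']

Replace the central directory with queries passed from peer to peer and 
the same sketch describes the directory-less variant the text alludes to. 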
Content Suppliers 
This is the underprivileged sector of the Internet. They all lose money 
(even e-tailers which offer basic, standardized goods - books, CDs - with 
the exception, until September 11, of sites connected to tourism). No one 
thanks them for content produced with the investment of a lot of effort and 
a lot of money. A really high-quality, fully commerce-enabled site costs up 
to 5,000,000 USD, excluding site maintenance and customer and visitor 
services. Content providers are constantly criticized for lack of 
creativity or for too much creativity. More and more is asked of them. They 
are exploited by intermediaries, hitchhikers and other parasites. This is 
all an offshoot of the ethos of the Internet as a free content area. 
More than 100 million men and women constantly access the Web - but this 
number stands to grow (the median prediction: 300 million). Yet, while the 
Web is used by 35% of those with access to the Internet - e-mail is used by 
more than 60%. E-mail is by far the most common function ("killer app") and 
specialized applications (Eudora, Internet Mail, Microsoft Exchange) - free 
or ad sponsored - keep it accessible to all and user-friendly. 
Most of the users like to surf (browse, visit sites) the net without reason 
or goal in mind. This makes it difficult to apply traditional marketing 
techniques. 
What is the meaning of "targeted audiences" or "market shares" in this 
context? 
If a surfer visits sites which deal with aberrant sex and nuclear physics 
in the same session - what to make of it? 
The public and legislative backlash against the gathering of surfers' data 
by Internet ad agencies and other web sites - has led to growing ignorance 
regarding the profile of Internet users, their demography, habits, 
preferences and dislikes. 
People like the very act of surfing. They want to be entertained - and 
only then do they use the Internet as a working tool, mostly in the service 
of their employer, who usually foots the bill. Users love free downloads 
(mainly 
software). 
"Free" is a key word on the Internet: it used to belong to the US 
Government and to a bunch of universities. Users like information, with 
emphasis on news and data about new products. But they do not like to shop 
on the net - yet. Only 38% of all surfers made a purchase during 1998. 
67% of them adore virtual sex. 50% of the sites most often visited are porn 
sites (this is reminiscent of the early days of the Video Cassette Recorder 
- VCR). People dedicate the same amount of time to watching video cassettes 
or television as they do to surfing the net. The Internet seems to 
cannibalize television. 
Sex is followed by music, sports, health, television, computers, cinema, 
politics, pets and cooking sites. People are drawn to interactive games. 
The Internet will shortly enable people to gamble, if not hampered by 
legislation. 10 billion USD in gambling money are predicted to pass through 
the net. This makes sense: nothing like a computer to provide immediate 
(monetary and psychological) rewards. 
Commerce on the net is another favourite. The Internet is a perfect medium 
for the sale of software and other digital products (e-books). The problem 
of data security is on its way to being solved with the SET (or other) 
world standard. 
As early as 1995, the Internet had more than 100 virtual shopping malls 
visited by 2.5 million shoppers (and probably double this number in 1996). 
The predictions for 1999 - between 1 and 5 billion USD of net shopping 
(plus 2 billion USD through on-line information providers, such as 
CompuServe and AOL) - proved woefully inaccurate. The actual figure for 
1998 was already 7 times the prediction for 1999. 
It is also widely believed that circa 20% of the family budget will pass 
through the Internet as e-money and this amounts to 150 billion USD. 
The Internet will become a giant inter-bank clearing system and varied ATM 
type banking and investment services will be provided through it. 
Basically, everything can be done through the Internet: looking for a job, 
for instance. 
Yet, the Internet will never replace human interaction. People are likely 
to prefer personal banking, window shopping and the social experience of 
the shopping mall to Internet banking and e-commerce, or m-commerce. 
Some sites already sport classified ads. This is not a bad way to defray 
expenses, though most classified ads are free (it is the advertising they 
attract that matters). 
Another developing trend is website rating and critique. Websites will be 
reviewed the way printed publications are today, and such reviews will have 
a limited influence on the consumption decisions of some users. Browsers 
already sport buttons 
labelled "What's New" and "What's Hot". Most Search Engines recommend 
specific sites. Users are cautious. Studies discovered that no user, no 
matter how heavy, has consistently re-visited more than 200 sites, a 
minuscule number. The 10 most popular web sites (Yahoo!, MSN, etc.) 
attracted more than 50% of all Internet traffic. Site recommendation 
services often produce random - at times, wrong - selections for their 
user. There are also concerns regarding privacy issues. The backlash 
against Amazon's "readers' circles" is an example. 
Web Critics, who work today mainly for the printed press, will publish 
their wares on the net and will link to intelligent software which will 
hyperlink, recommend and refer. Some web critics will be identified with 
specific applications - really, expert systems which will incorporate their 
knowledge and experience. 
The Money 
Where will the capital needed to finance all these developments come from? 
Again, there are two schools: 
One says that sites will be financed through advertising - and so will 
search engines and other applications accessed by users. 
Certain ASPs (Application Service Providers which rent out access to 
application software which resides on their servers) are considering this 
model. 
The second version is simpler and allows for the existence of non-
commercial content. 
It proposes to collect negligible sums (cents or fractions of cents) from 
every user for every visit ("micro-payments") or a subscription fee. These 
accumulated cents or subscription fees will enable the owners of old sites 
to update and to maintain them and encourage entrepreneurs to develop new 
ones. Certain content aggregators (especially of digital textbooks) have 
adopted this model (Questia, Fathom). 
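The arithmetic of the micro-payment school is simple; the traffic and fee 
below are illustrative assumptions, not figures from the text: 

    # Tiny per-visit fees accumulating into site-maintenance money.
    visits_per_month = 200_000     # assumed traffic of a mid-sized site
    fee_per_visit_usd = 0.01       # "cents or fractions of cents"

    print(visits_per_month * fee_per_visit_usd)   # 2000.0 USD per month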
The adherents of the first school pointed to the 5 million USD invested in 
advertising during 1995 and to the 60 million or so invested during 1996. 
Its opponents point to exactly the same numbers: ridiculously small when 
contrasted with more conventional advertising modes. The potential of 
advertising on the net is limited to 1.5 billion USD annually in 1998, 
thundered the pessimists (many thought that even half that would be very 
nice). The actual figure was double the prediction but still woefully small 
and inadequate to support the Internet's content development. 
Compare these figures to the sale of Internet software ($4 billion), 
Internet hardware ($3 billion), Internet access provision ($4.2 billion) in 
1995. 
Hambrecht &amp; Quist estimated (in a report released in mid-1996) that 
Internet-related industries scooped up 23.2 billion USD annually. 
And what follows advertising is hardly more encouraging. 
The consumer interacts and the product is delivered to him. This - the 
delivery phase - is a slow and enervating epilogue to the exciting affair 
of ordering through the net at the speed of light. Too many consumers still 
complain that they do not receive what they ordered, or that delivery is 
late and products defective. 
The solution may lie in the integration of advertising and content. 
PointCast, for instance, integrated advertising into its news broadcasts, 
continuously streamed to the user's screen, even when inactive (they 
provided a downloadable active screen saver and ticker in a "push 
technology"). Downloading of digital music, video and text (e-books) will 
lead to immediate gratification of the consumer and will increase the 
efficacy of advertising. 
Whatever the case may be, a uniform, agreed upon system of rating as a 
basis for charging advertisers, is sorely needed. There is also the 
question of what, exactly, the advertiser pays for. 
Many advertisers (Procter and Gamble, for instance) refuse to pay according 
to the number of hits or impressions (entries, visits to a site). They 
agree to pay only according to the number of times that their advertisement 
was actually viewed (page views). 
This different basis for calculation is likely to upset all revenue 
scenarios. 
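With invented numbers, the gap between the two bases of calculation is 
easy to see: 

    # Same traffic, two payment bases. All numbers are illustrative.
    site_visits = 1_000_000      # raw visits to the site ("impressions")
    ad_view_share = 0.20         # fraction of visits that view the ad
    rate_per_1000_usd = 20.0     # hypothetical rate under either basis

    per_visit_bill = site_visits / 1000 * rate_per_1000_usd
    per_view_bill = site_visits * ad_view_share / 1000 * rate_per_1000_usd
    print(per_visit_bill, per_view_bill)   # 20000.0 4000.0

The same traffic produces a fivefold gap in the advertiser's bill - hence 
the upset revenue scenarios. 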
Very few sites of important, respectable newspapers operate on a 
subscription basis - Dow Jones (The Wall Street Journal) and The Economist, 
to mention but two. 
Will this become the prevailing trend? 
 
 
The Internet as a Metaphor 
 
Three metaphors come to mind when considering the Internet 
"philosophically". 
The Internet as a Chaotic Library 
1. The Problem of Cataloguing
The Internet is an assortment of billions of pages containing information. 
Some of them are visible and others are generated from hidden databases by 
users' requests ("Invisible Internet"). 
The Internet displays no discernible order, classification, or 
categorization. As opposed to "classical" libraries, no one has invented a 
cataloguing standard (remember Dewey?). This is so needed that it is 
amazing that it has not been invented yet. Some sites indeed apply the 
Dewey Decimal System (Suite101). Others default to a directory structure 
(Open Directory, Yahoo!, Look Smart and others). 
Had such a standard existed (an agreed upon numerical cataloguing method) - 
each site would have self-classified. Sites would have an interest in 
doing so to increase their penetration rates and their visibility. This, naturally, 
would have eliminated the need for today's clunky, incomplete and (highly) 
inefficient search engines. 
A site whose number starts with 900 will be immediately identified as 
dealing with history and multiple classification will be encouraged to 
allow finer cross-sections to emerge. An example of such an emerging 
technology of "self classification" and "self-publication" (though limited 
to scholarly resources) is the "Academic Resource Channel" by Scindex. 
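A minimal sketch of how trivially software could categorize self-
classified sites, assuming such a numerical standard existed. The mapping 
is the standard Dewey top level; the site number is invented: 

    # Classify a site by the first digit of its self-assigned number.
    DEWEY_TOP = {
        "0": "Generalities & computing", "1": "Philosophy", "2": "Religion",
        "3": "Social sciences", "4": "Language", "5": "Science",
        "6": "Technology", "7": "Arts", "8": "Literature",
        "9": "History & geography",
    }

    def classify(site_number: str) -> str:
        return DEWEY_TOP.get(site_number[0], "Unclassified")

    print(classify("944.05"))   # -> 'History & geography'

No crawling, parsing, or guessing - which is the whole argument for 
self-classification. 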
Users will not be required to remember reams of numbers. Future browsers 
will be akin to catalogues, very much like the applications used in modern 
day libraries. Compare this utopia to the current dystopia. Users struggle 
with reams of irrelevant material to finally reach a partial and 
disappointing destination. At the same time, there likely are web sites 
which exactly match the poor user's needs. Yet, what currently determines 
the chances of a happy encounter between user and content - are the whims 
of the specific search engine used and things like meta-tags, headlines, a 
fee paid, or the right opening sentences. 
2. Screen versus Page
The computer screen, because of physical limitations (size, the fact that 
it has to be scrolled) fails to effectively compete with the printed page. 
The latter is still the most ingenious medium yet invented for the storage 
and release of textual information. Granted: a computer screen is better at 
highlighting discrete units of information. So, this draws the battle 
lines: structures (printed pages) versus units (screens), the continuous and 
easily reversible versus the discrete. 
The solution is an efficient way to translate computer screens to printed 
matter. It is hard to believe, but no such thing exists. Computer screens 
are still hostile to off-line printing. In other words: if a user copies 
information from the Internet to his Word Processor (or vice versa, for 
that matter) - he ends up with a fragmented, garbage-filled and non-
aesthetic document. 
Very few site developers try to do something about it - even fewer succeed. 
3. The Internet and the CD-ROM
One of the biggest mistakes of content suppliers is that they do not mix 
contents or have a "static-dynamic interaction". 
The Internet can now easily interact with other media (especially with 
audio CDs and with CD-ROMs) - even as the user surfs. 
Examples abound: 
A shopping catalogue can be distributed on a CD-ROM by mail. The Internet 
Site will allow the user to order a product previously selected from the 
catalogue, while off-line. The catalogue could also be updated through the 
site (as is done with CD-ROM encyclopedias). 
The advantages of the CD-ROM are clear: very fast access time (dozens of 
times faster than the access to a site using a dial up connection) and a 
data storage capacity tens of times bigger than the average website. 
Another example: a CD-ROM can be distributed, containing hundreds of 
advertisements. The consumer will select the ad that he wants to see and 
will connect to the Internet to view a relevant video. 
He could then also have an interactive chat (or a conference) with a 
salesperson, receive information about the company, about the ad, about the 
advertising agency which created the ad - and so on. 
CD-ROM based encyclopedias (such as the Britannica, Encarta, Grolier) 
already contain hyperlinks which carry the user to sites selected by an 
Editorial Board. 
But CD-ROMs are probably a doomed medium. This industry chose to emphasize 
the wrong things. Storage capacity increased exponentially and, within a 
year, desktops with 80 Gb hard disks will be common. Moreover, the Network 
Computer - the stripped down version of the personal computer - will put at 
the disposal of the average user terabytes in storage capacity and the 
processing power of a supercomputer. What separates computer users from 
this utopia is the communication bandwidth. With the introduction of 
radio, satellite and ADSL broadband services, cable modems and compression 
methods - 
video (on demand), audio and data will be available speedily and 
plentifully. 
The CD-ROM, on the other hand, is not mobile. It requires installation and 
the utilization of sophisticated hardware and software. This is no user 
friendly push technology. It is nerd-oriented. As a result, CD-ROMs are not 
an immediate medium. There is a long time lapse between the moment they are 
purchased and the moment the first data become accessible to the user. 
Compare this to a book or a magazine. Data in these oldest of media is 
instantly available to the user and allows for easy and accurate "back" and 
"forward" functions. 
Perhaps the biggest mistake of CD-ROM manufacturers has been their 
inability to offer an integrated hardware and software package. CD-ROMs are 
not compact. A Walkman is a compact hardware-cum-software package. It is 
easily transportable, it is thin, it contains numerous, user-friendly, 
sophisticated functions, it provides immediate access to data. So does the 
discman or the MP3-man. This cannot be said of the CD-ROM. By tying its 
future to the obsolete concept of stand-alone, expensive, inefficient and 
technologically unreliable personal computers - CD-ROMs have sentenced 
themselves to oblivion (with the possible exception of reference material). 
4. On-line Reference Libraries
These already exist. A visit to the on-line Encyclopaedia Britannica 
exemplifies some of the tremendous, mind boggling possibilities: 
Each entry is hyperlinked to sites on the Internet which deal with the same 
subject matter. The sites are carefully screened (though more detailed 
descriptions of each site should be available - they could be prepared 
either by the staff of the encyclopaedia or by the site owner). Links are 
available to data in various forms, including audio and video. Everything 
can be copied to the hard disk or to CD-ROMs. 
This is a new conception of a knowledge centre - not just an assortment of 
material. It is modular, can be added on and subtracted from. It can be 
linked to a voice Q&A centre. Queries by subscribers can be answered by e-
mail, by fax, posted on the site, hard copies can be sent by post. This 
"Trivial Pursuit" service could be very popular - there is considerable 
appetite for "Just in Time Information". The Library of Congress - together 
with a few other libraries - is in the process of making just such a 
service available to the public (CDRS - Collaborative Digital Reference 
Service). 
5. The Feedback Option
Hard to believe, but very few sites encourage their guests to express an 
opinion about the site, its contents and its aesthetics. This indicates an 
ossified mode of thinking about the most dynamic mass medium ever created, 
the only interactive mass medium yet. Each site must absolutely contain 
feedback and rating questionnaires. It has the side benefit of creating a 
database of the visitors to the site. 
Moreover, each site can easily become a "knowledge centre". 
Let us consider a site dedicated to advertising and marketing: 
It can contain feedback questionnaires (what do you think about the site, 
suggestions for improvement, mailto and leave message facilities, etc.) 
It can contain rating questionnaires (rate these ads, these TV or radio 
shows, these advertising campaigns). 
It can allocate some space to clients to create their home pages in (these 
home pages could lead to their sites, to other sites, to other sections of 
the host site - and, in any case, will serve as a display of the creative 
talent of the site owners). This will give the site owners a picture of the 
distribution of the areas of interest of the visitors to the site. 
The site can include statistical, tracking and counter software. 
Such a site can refer to hundreds of useful shareware applications (which 
deal with different aspects of advertising and marketing, for instance). 
Developers of applications will be able to use the site to promote their 
products. Other practical applications could also be referred to from - or 
reside on - the site (browsers, games, search engines). 
And all this can be organized in a portal structure (for instance, by 
adopting the open software of the Open Directory Project).
6. Internet-Derived CD-ROMs
The Internet is an enormous reservoir of freely available, public domain, 
information. 
With a minimal investment, this information can be gathered into coherent, 
theme oriented, cheap CD-ROMs. Each such CD-ROM can contain: 
- Addresses of web sites specific to the subject matter 
- The first pages of each of these sites 
- Hyperlinks to each of the sites 
- A browser 
- Access to all the important search engines 
- Recommended search strings (it is extremely difficult to formulate a 
  successful search on the Internet; it takes expertise. "Ready-made 
  searches" will be a hit in the future, as the number of sites grows) 
- A dictionary of professional terms, a speller and a thesaurus 
- A list of general reference sites 
- Shareware specific to the field 
7. Publishing
The Internet is the world's largest "publisher", by far. It "publishes" 
FAQs (Frequently Asked Questions regarding almost every technical 
matter in the world), e-zines (electronic versions of magazines, not a very 
profitable pursuit), the electronic versions of dailies (together with on-
line news and information services), reference and other e-books, 
monographs, articles and minutes of discussions ("threads"), among other 
types of material. 
Publishing an e-zine has a few advantages: it promotes the sales of the 
printed edition, it helps to sign on subscribers and it leads to the sale 
of advertising space. The electronic archive function (see next section) 
saves the need to file back issues, the space required to do so and the 
irritating search for data items. 
The future trend is a combined subscription: electronic (mainly for the 
archival value and the ability to hyperlink to additional information) and 
printed (easier for browsing the current issue). 
The electronic daily presents other advantages: 
It allows for immediate feedback and for flowing, almost real-time, 
communication between writers and readers. The electronic version, 
therefore, acquires a gyroscopic function: a navigation instrument, always 
indicating deviations from the "right" course. The content can be instantly 
updated and immediacy has its premium (remember the Lewinsky affair?). 
Strangely, this (conventional) field was the first to develop a "virtual 
reality" facet. There are virtual "magazine stalls". They look exactly like 
the real thing and the user can buy a paper using his mouse. 
Specialty hand held devices already allow for downloading and storage of 
vast quantities of data (up to 4000 print pages). The user gains access to 
libraries containing hundreds of texts, adapted to be downloaded, stored 
and read by the specific device. Again, a convergence of standards is to be 
expected in this field as well (the final contenders will probably be 
Adobe's PDF against Microsoft's MS-Reader). 
Broadly, e-books are treated either as: 
Continuation of print books (p-books) by other means  
or as  
A whole new publishing universe. 
Since p-books are a more convenient medium than e-books - they will prevail 
in any straightforward "medium replacement" or "medium displacement" 
battle. 
In other words, if publishers persist in the simple and straightforward 
conversion of p-books to e-books - then e-books are doomed. 
They are simply inferior to the price, comfort, tactile delights, 
browseability and scanability of p-books. 
But e-books - being digital - open up a vista of hitherto neglected 
possibilities. These will only be enhanced and enriched by the introduction 
of e-paper and e-ink. Among them: 
- Hyperlinks within the e-book and without it - to web content, 
  reference works, etc. 
- Embedded instant shopping and ordering links 
- Divergent, user-interactive, decision-driven plotlines 
- Interaction with other e-books (using a wireless standard) - 
  collaborative authoring 
- Interaction with other e-books - gaming and community activities 
- Automatically or periodically updated content 
- Multimedia 
- Database, Favourites and History Maintenance (reading habits, 
  shopping habits, interaction with other readers, plot-related 
  decisions and much more) 
- Automatic and embedded audio conversion and translation capabilities 
- Full wireless piconetworking and scatternetworking capabilities 
The technology is still not fully there. Wars rage in both the wireless and 
the ebook realms. Platforms compete. Standards clash. Gurus debate. But 
convergence is inevitable and with it the e-book of the future. 
8. The Archive Function
The Internet is also the world's biggest cemetery: tens of thousands of 
deadbeat sites, still accessible - the "Ghost Sites" of this electronic 
frontier. 
This, in a way, is collective memory. One of the Internet's main functions 
will be to preserve and transfer knowledge through time. It is called 
"memory" in biology - and "archive" in library science. The history of the 
Internet is being documented by search engines (Google) and specialized 
services (Alexa) alike.
 
 
The Internet as a Collective Brain 
  
Drawing a comparison from the development of a human baby - the human race 
has just commenced to develop its neural system. 
The Internet fulfils all the functions of the Nervous System in the body 
and is, both functionally and structurally, pretty similar. It is 
decentralized, redundant (each part can serve as functional backup in case 
of malfunction). It hosts information which is accessible in a few ways, it 
contains a memory function, it is multimodal (multimedia - textual, visual, 
audio and animation). 
I believe that the comparison is not superficial and that studying the 
functions of the brain (from infancy to adulthood) - amounts to perusing 
the future of the Net itself. 
1. The Collective Computer
To carry the metaphor of "a collective brain" further, we would expect the 
processing of information to take place in the Internet, rather than inside 
the end-user's hardware (the same way that information is processed in the 
brain, not in the eyes). Desktops will receive the results and communicate 
with the Net to receive additional clarifications and instructions and to 
convey information gathered from their environment (mostly, from the user). 
This is part of the philosophy behind the Java programming language: it 
deals in applets - small bits of software - and links different computer 
platforms by means of software. 
Put differently: 
Future servers will contain not only information (as they do today) - but 
also software applications. The user of an application will not be forced 
to buy it. He will not be driven into hardware-related expenditures to 
accommodate the ever growing size of applications. He will not find himself 
wasting his scarce memory and computing resources on passive storage. 
Instead, he will use a browser to call a central computer. This computer 
will contain the needed software, broken down into its elements (applets, small 
applications). Anytime the user wishes to use one of the functions of the 
application, he will siphon it off the central computer. When finished - he 
will "return" it. Processing speeds and response times will be such that 
the user will not feel at all that it is not with his own software that he 
is working (the question of ownership will be very blurred in such a 
world). This technology is available and it has provoked a heated debate 
about the future shape of the computing industry as a whole (desktops - 
really power packs - or network computers, little more than dumb 
terminals). 
Applications are already offered to corporate users by ASPs (Application 
Service Providers). 
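A toy rendering of this model, with the "central computer" reduced to an 
in-memory table of functions - an illustration of the idea, not of any 
actual Java or ASP mechanism: 

    # The client "siphons" a small function off the central computer,
    # uses it, and stores nothing locally.
    central_server = {
        "spellcheck": lambda text: text.replace("teh", "the"),
        "word_count": lambda text: len(text.split()),
    }

    def call_remote(function_name: str, *args):
        applet = central_server[function_name]   # fetch on demand
        return applet(*args)                     # use it, then let it go

    print(call_remote("word_count", "the collective computer at work"))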
In the last few years, scientists have harnessed the combined power of the 
computers linked to the Internet at any given moment to perform astounding 
feats of distributed parallel processing. Millions of PCs connected to the net co- 
process signals from outer space, meteorological data and solve complex 
equations. This is a prime example of a collective brain in action. 
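The pattern is easy to sketch: a coordinator splits a job into independent 
work units, volunteers process them, and the partial results are merged. 
Here local worker processes stand in for the millions of networked PCs: 

    # Distributed parallel processing in miniature.
    from multiprocessing import Pool

    def process_unit(unit):
        # Stand-in for analysing a chunk of signal or weather data.
        return sum(x * x for x in unit)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        units = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        with Pool() as pool:              # each worker = one "volunteer PC"
            partials = pool.map(process_unit, units)
        print(sum(partials))              # merged at the coordinator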
2. The Intranet - a Logical Extension of the Collective Computer
LANs (Local Area Networks) are no longer a rarity in corporate offices. 
WANs (Wide Area Networks) are used to connect geographically dispersed 
organs of the same legal entity (branches of a bank, daughter companies, a 
sales force). Many LANs are wireless. 
The intranet / extranet and wireless LANs will be the winners. They will 
gradually eliminate both fixed-line LANs and WANs. The Internet offers 
equal, platform-independent, location-independent and time-of-day-
independent access to all the members of an organization. Sophisticated 
firewall security applications protect the privacy and confidentiality of 
the intranet from all but the most determined and savvy hackers. 
The intranet is an intra-organizational communication network, constructed 
on the platform of the Internet, and it enjoys all the Internet's 
advantages. The extranet is open to clients and suppliers as well. 
The company's server can be accessed by anyone authorized, from anywhere, 
at any time (with local - rather than international - communication costs). 
The user can leave messages (internal e-mail or v-mail), access information 
- proprietary or public - from it, and participate in "virtual teamwork" 
(see next chapter). 
By the year 2002, a standard intranet interface will emerge. This will be 
facilitated by the opening up of the TCP/IP communication architecture and 
its availability to PCs. A billion USD will go just to finance intranet 
servers - or, at least, this is the median forecast. 
The development of measures to safeguard server routed inter-organizational 
communication (firewalls) is the solution to one of two obstacles to the 
institution of the Intranet. The second problem is the limited bandwidth 
which does not permit the efficient transfer of audio (not to mention 
video). 
It is difficult to conduct video conferencing through the Internet. Even 
the voices of discussants who use internet phones come out (slightly) 
distorted. 
All this did not prevent 95% of the Fortune 1000 from installing an 
intranet. 82% of the rest intend to install one by the end of this year. 
Medium to big size American firms have 50-100 intranet terminals for every 
Internet one. 
At the end of 1997, there were 10 web servers for every other type of 
server in organizations. The sale of intranet related software was 
projected to multiply by 16 (to 8 billion USD) by the year 1999. 
One of the greatest advantages of the intranet is the ability to transfer 
documents between the various parts of an organization. Consider Visa: it 
pushed 2 million documents per day internally in 1996. 
An organization equipped with an intranet can (while protected by 
firewalls) give its clients or suppliers access to non-classified 
correspondence. This notion has its charm. Consider a newspaper: it can 
give access to all the materials which were discarded by the editors. Some 
news items are fit to print - yet are discarded because of space 
limitations. Still, someone is bound to be interested. It costs the 
newspaper close to nothing (the material is, normally, already computer-
resident) - and it might even generate added circulation and income. It can 
even be conceived 
as an "underground, non-commercial, alternative" newspaper for a wholly 
different readership. 
The above is but one example of the possible use of the intranet to 
communicate with the organization's consumer base. 
3. Mail and Chat
The Internet (its e-mail possibilities) is eroding traditional mail. The 
market share of the post office in conveying messages by regular mail has 
dwindled from 77% to 62% (1995). E-mail has expanded to capture 36% (up 
from 19%). 
90% of customers with on-line access use e-mail from time to time and 60% 
work with it regularly. More than 2 billion messages traverse the internet 
daily. 
E-mail applications are available as freeware and are included in all 
browsers. Thus, the Internet has completely assimilated what used to be a 
separate service, to the extent that many people make the mistake of 
thinking that e-mail is a feature of the Internet. Microsoft continues to 
incorporate previously independent applications in its browsers - a 
behaviour which led to the 1999 anti-trust lawsuit against it. 
The internet will do to phone calls what it has done to mail. Already there 
are applications (Intel's, Vocaltec's, Net2Phone) which enable the user to 
conduct a phone conversation through his computer. The voice quality has 
improved. The discussants can cut into each other's words, argue and listen 
to tonal nuances. Today, the parties (two or more) engaging in the 
conversation must possess the same software and the same (computer) 
hardware. In the very near future, computer-to-regular phone applications 
will eliminate this requirement. And, again, simultaneous multi-modality: 
the user can talk over the phone, see his party, send e-mail, receive 
messages and transfer documents - without obstructing the flow of the 
conversation. 
The cost of transferring voice will become so negligible that free voice 
traffic is conceivable in 3-5 years. Data traffic will overtake voice 
traffic by a wide margin. 
This beats regular phones. 
The next phase will probably involve virtual reality. Each of the parties 
will be represented by an "avatar", a 3-D figurine generated by the 
application (or the user's likeness mapped into the software and 
superimposed on the avatar). These figurines will be multi-dimensional: 
they will possess their own communication patterns, special habits, 
history, preferences - in short: their own "personality". 
Thus, they will be able to maintain an "identity" and a consistent pattern 
of communication which they will develop over time. 
Such a figure could host a site, accept, welcome and guide visitors, all 
the time bearing their preferences in its electronic "mind". It could 
narrate the news, like "Ananova" does. Visiting sites in the future is 
bound to be a much more pleasant affair. 
4. E-cash
In 1996, the four corporate giants (Visa, MasterCard, Netscape and 
Microsoft) agreed on a standard for effecting secure payments through the 
Internet: SET. Internet commerce is supposed to mushroom by a factor of 
50, to 25 billion USD. Site owners will be able to collect rent from passing 
visitors - or fees for services provided within the site. Amazon instituted 
an honour system to collect donations from visitors. Dedicated visitors 
will not be deterred by such trifles. 
5. The Virtual Organization
The Internet allows simultaneous communication between an almost unlimited 
number of users. This is coupled with the efficient transfer of multimedia 
(video included) files. 
This opens up a vista of mind boggling opportunities which are the real 
core of the Internet revolution: the virtual collaborative ("Follow the 
Sun") modes. 
Examples: 
A group of musicians will be able to compose music or play it - while 
spatially and temporally separated; 
Advertising agencies will be able to co-produce ad campaigns in a real time 
interactive mode; 
Cinema and TV films will be produced from disparate geographical spots 
through the teamwork of people who never meet, except through the net. 
These examples illustrate the concept of the "virtual community". Locations 
in space and time will no longer hinder a collaboration in a team: be it 
scientific, artistic, cultural, or for the provision of services (a virtual 
law firm or accounting office, a virtual consultancy network). 
Two ongoing developments are the virtual mall and the virtual catalogue. 
There are well over 300 active virtual malls on the Internet. In 1998, they 
were frequented by 32.5 million shoppers, who shopped in them for goods and 
services. The intranet can also be thought of as a "virtual 
organization", or a "virtual business". 
The virtual mall is a computer "space" (pages) in the internet, wherein 
"shops" are located. These shops offer their wares using visual, audio and 
textual means. The visitor passes a gate into the store and looks through 
its offering, until he reaches a buying decision. Then he engages in a 
feedback process: he pays (with a credit card), buys the product and waits 
for it to arrive by mail. The manufacturers of digital products 
(intellectual property such as e-books or software) have begun selling 
their merchandise on-line, as file downloads. 
Yet, slow communications and limited bandwidth - constrain the growth 
potential of this mode of sale. Once solved - intellectual property will be 
sold directly from the net, on-line. Until such time, the intervention of 
the Post Office is still required. So, the virtual mall is nothing but a 
glorified computerized mail catalogue or Buying Channel, the only 
difference being the exceptionally varied inventory. 
Websites which started as "specialty stores" are fast transforming 
themselves into multi-purpose virtual malls. Amazon.com, for instance, has 
bought into a virtual pharmacy and into other virtual businesses. It is now 
selling music, video, electronics and many other products. It started as a 
bookstore. 
This contrasts with a much more creative idea: the virtual catalogue. It is 
a form of narrowcasting (as opposed to broadcasting): a surgically accurate 
targeting of potential consumer audiences. Each group of profiled consumers 
(no matter how small) is fitted with their own - digitally generated - 
catalogue. This is updated daily: the variety of wares on offer (adjusted 
to reflect inventory levels, consumer preferences and goods in transit) - 
and prices (sales, discounts, package deals) change in real time. 
The user will enter the site and there delineate his consumption profile 
and his preferences. A customized catalogue will be immediately generated 
for him. 
From then on, the history of his purchases, preferences and responses to 
feedback questionnaires will be accumulated and added to a database. 
Each catalogue generated for him will come replete with order forms. Once 
the user has concluded his purchases, his profile will be updated. 
There are no technological obstacles to implementing this vision today - 
only administrative and legal ones. Big retail stores are not up to 
processing the flood of data expected to arrive. They also remain highly 
sceptical regarding the feasibility of the new medium. And privacy issues 
prevent data mining or the effective collection and usage of personal data. 
The virtual catalogue is a private case of a new internet off-shoot: the 
"smart (shopping) agents". These are AI applications with "long memories". 
They draw detailed profiles of consumers and users and then suggest 
purchases and refer to the appropriate sites, catalogues, or virtual malls. 
They also provide price comparisons and the new generation (NetBot) cannot 
be blocked or fooled by using differing product categories. 
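A toy version of such a price-comparing agent; the shops, wares and prices 
are invented: 

    # Find the cheapest offer for a product across several catalogues.
    catalogues = {
        "shop-a": {"e-book": 6.00, "cd": 13.50},
        "shop-b": {"e-book": 4.95, "cd": 14.00},
        "shop-c": {"e-book": 5.50},
    }

    def best_offer(product):
        offers = {shop: prices[product]
                  for shop, prices in catalogues.items()
                  if product in prices}
        shop = min(offers, key=offers.get)
        return shop, offers[shop]

    print(best_offer("e-book"))   # ('shop-b', 4.95)

A real agent adds the "long memory" - the accumulated user profile that 
biases which products it goes looking for in the first place. 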
In the future, these agents will refer also to real life retail chains and 
issue a map of the branch or store closest to an address specified by the 
user (the default being his residence). This technology can be seen in 
action in a few music sites on the web and is likely to be dominant with 
wireless internet appliances. The owner of an internet enabled (third 
generation) mobile phone is likely to be the target of geographically-
specific marketing campaigns, ads and special offers pertaining to his 
current location (as reported by GPS - the satellite-based Global 
Positioning System). 
6. Internet News
Internet news is advantaged. It can be frequently and dynamically updated 
(unlike static print news), and it is always accessible (like print news), 
immediate and fresh. 
The future will witness a form of interactive news. A special "corner" in 
the site will be open to updates posted by the public (the equivalent of 
press releases). This will provide readers with a glimpse into the making 
of the news, the raw material it is made of. The same technology will be 
applied to interactive TVs. Content will be downloaded from the internet 
and be displayed as an overlay on the TV screen or in a square in a special 
location. The contents downloaded will be directly connected to the TV 
programming. Thus, the biography and track record of a football player will 
be displayed during a football match, and the history of a country when it 
gets news coverage. 
 
 
Terra Internetica - Internet, an Unknown Continent 
  
This is an unconventional way to look at the Internet. Laymen and experts 
alike talk about "sites" and "advertising space". Yet, the Internet was 
never compared to a new continent whose surface is infinite. 
The Internet will have its own real estate developers and construction 
companies. The real life equivalents derive their profits from the scarcity 
of the resource that they exploit - the Internet counterparts will derive 
their profits from the tenants (the content). 
Two examples: 
A few companies bought "Internet Space" (pages, domain names, portals), 
developed it and make commercial use of it by: 
- renting it out 
- constructing infrastructure and selling it 
- providing an intelligent gateway, an entry point to the rest of the 
  internet 
- selling advertising space, which subsidizes the tenants (Yahoo!-
  Geocities, Tripod and others) 
- cybersquatting (purchasing specific domain names identical to brand 
  names in the "real" world) and then selling the domain names to 
  interested parties 
Internet Space can be easily purchased or created. The investment is low 
and getting lower with the introduction of competition in the field of 
domain registration services and the increase in the number of top domains. 
Then, infrastructure can be erected - for a shopping mall, for free home 
pages, for a portal, or for another purpose. It is precisely this 
infrastructure that the developer can later sell, lease, franchise, or rent 
out. 
At the beginning, only members of the fringes and the avant-garde 
(inventors, risk assuming entrepreneurs, gamblers) invest in a new 
invention. The invention of a new communications technology is mostly 
accompanied by devastating silence. 
No one can say what the optimal uses of the invention are (in other words, 
what its future is). Many - mostly members of the scientific and business 
elites - argue that there is no real need for the invention and that it 
substitutes a new and untried way for old and tried modes of doing the same 
thing (so why assume the risk?). 
These criticisms are usually well-founded: 
To start with, there is, indeed, no need for the new medium. A new medium 
invents itself - and the need for it. It also generates its own market to 
satisfy this newly found need. 
Two prime examples are the personal computer and the compact disc. 
When the PC was invented, its uses were completely unclear. Its performance 
was lacking, its abilities limited, it was horribly user unfriendly. 
It suffered from faulty design, absent user comfort and ease of use and 
required considerable professional knowledge to operate. The worst part was 
that this knowledge was unique to the new invention (not portable). 
It reduced labour mobility and limited one's professional horizons. There 
were many gripes among those assigned to tame the new beast. 
The PC was thought of, at the beginning, as a sophisticated gaming machine, 
an electronic baby-sitter. As its keyboard was noticed and as the 
professional horizon cleared, it came to be thought of as a glorified 
typewriter or spreadsheet. It was used mainly as a word processor 
(and its existence justified solely on these grounds). The spreadsheet was 
the first real application and it demonstrated the advantages inherent to 
this new machine (mainly flexibility and speed). Still, it was more (speed) 
of the same. A quicker ruler or pen and paper. What was the difference 
between this and a hand held calculator (some of them already had 
computing, memory and programming features)? 
The PC was recognized as a medium only 30 years after it was invented with 
the introduction of multimedia software. All this time, the computer 
continued to spin off markets and secondary markets, needs and professional 
specialities. The talk as always was centred on how to improve on existing 
markets and solutions. 
The Internet is the computer's first important breakthrough. Hitherto the 
computer was only quantitatively different - the multimedia and the 
Internet have made it qualitatively superior, actually, sui generis, 
unique. 
This, precisely, is the ghost haunting the Internet: 
It has been invented, is maintained and is operated by computer 
professionals. For decades these people have been conditioned to think in 
Olympic terms: more, stronger, higher. Not: new, unprecedented, non-
existent. To improve - not to invent. They stumbled across the Internet - 
it invented itself despite its own creators. 
Computer professionals (hardware and software experts alike) - are linear 
thinkers. The Internet is non linear and modular. 
It is still the age of hackers. There is still a lot to be done in 
improving technological prowess and powers. But their control of the 
contents is waning and they are being gradually replaced by communicators, 
creative people, advertising executives, psychologists and the totally 
unpredictable masses who flock to flaunt their home pages. 
These all are attuned to the user, his mental needs and his information and 
entertainment preferences. 
The compact disc is a different tale. It was intentionally invented to 
improve upon an existing technology (basically, Edison's Gramophone). 
Market-wise, this was a major gamble: the improvement was, at first, 
debatable (many said that the sound quality of the first generation of 
compact discs was inferior to that of its contemporaneous record players). 
Consumers had to be convinced to change both software and hardware and to 
dish out thousands of dollars just to listen to what the manufacturers 
claimed was better quality Bach. A better argument was the longer life of 
the software (though contrasted with the limited life expectancy of the 
consumer, some of the first sales pitches sounded absolutely morbid). 
The computer suffered from unclear positioning. The compact disc was very 
clear as to its main functions - but had a rough time convincing the 
consumers. 
Every medium is first controlled by the technical people. Gutenberg was a 
printer - not a publisher. Yet, he is the world's most famous publisher. 
The technical cadre is joined by dubious or small-scale entrepreneurs and, 
together, they establish ventures with no clear vision, market-oriented 
thinking, or orderly plan of action. The legislator is also dumbfounded and 
does not grasp what is happening - thus, there is no legislation to 
regulate the use of the medium. Witness the initial confusion concerning 
copyrighted software and the copyrights of ROM-embedded software. Abuse or 
under-utilization of resources grows. Recall the sale of radio frequencies 
to the first cellular phone operators in the West - a situation which 
repeats itself in Eastern and Central Europe nowadays. 
But then more complex transactions - exactly as in real estate in "real 
life" - begin to emerge. 
This distinction is important. While in real life it is possible to sell an 
undeveloped plot of land - no one will buy "pages". The supply of these is 
unlimited - their scarcity (and, therefore, their virtual price) is zero. 
The second example involves the utilization of a site - rather than its 
mere availability. 
A developer could open a site where first-time authors are able to publish 
their manuscripts - for a fee. Evidently, such a fee would be a fraction of 
what it takes to publish a "real life" book. The author could collect money 
for every download of his book - and split it with the site developer. 
Potential buyers would be provided with access to the table of contents and 
to a sample chapter of each book. This is currently being done by a few 
fledgling firms but a full-scale publishing industry has not yet developed. 
 
 
The Life of a Medium 
  
The internet is simply the latest in a series of networks which 
revolutionized our lives. A century before the internet, the telegraph, the 
railways, the radio and the telephone were similarly heralded as "global" 
and transforming. 
Every medium of communications goes through the same evolutionary cycle: 
Anarchy 
The Public Phase 
At this stage, the medium and the resources attached to it are very cheap, 
accessible, under no regulatory constraints. The public sector steps in: 
higher education institutions, religious institutions, government, not for 
profit organizations, non governmental organizations (NGOs), trade unions, 
etc. Bedevilled by limited financial resources, they regard the new medium 
as a cost effective way of disseminating their messages. 
The Internet was not exempt from this phase which ended only a few years 
ago. It started with complete computer anarchy, manifested in ad hoc 
networks, local networks and networks of organizations (mainly universities 
and organs of the government, such as DARPA, a part of the defence 
establishment in the USA). Non-commercial entities jumped on the bandwagon 
and started sewing these networks together (an activity fully subsidized by 
government funds). The result was a globe-encompassing network of academic 
institutions. The American Pentagon established the network of all 
networks, the ARPANET. Other government departments joined the fray, headed 
by the National Science Foundation (NSF), which withdrew from the Internet 
only recently. 
The Internet (with a different name) became semi-public property - with 
access granted to the chosen few. 
Radio took precisely this course. Radio transmissions started in the USA in 
1920. Those were anarchic broadcasts with no discernible regularity. Non 
commercial organizations and not for profit organizations began their own 
broadcasts and even created radio broadcasting infrastructure (albeit of 
the cheap and local kind) dedicated to their audiences. Trade unions, 
certain educational institutions and religious groups commenced "public 
radio" broadcasts. 
The Commercial Phase 
When the users (e.g., listeners in the case of the radio, or owners of PCs 
and modems in the example of the Internet) reach a critical mass - the 
business sector is alerted. In the name of capitalist ideology (another 
religion, really) it demands the "privatization" of the medium. This plucks 
very sensitive strings in every Western soul: the efficient allocation of 
resources which results from competition; the corruption and inefficiency 
naturally associated with the public sector ("Other People's Money" - OPM); 
the ulterior motives of members of the ruling political echelons (the 
infamous American paranoia); the lack of variety and of catering to the 
tastes and interests of certain audiences; the equation private enterprise 
= democracy; and more. 
The end result is the same: the private sector takes over the medium from 
"below" (makes offers to the owners or operators of the medium - that they 
cannot possibly refuse) - or from "above" (successful lobbying in the 
corridors of power leads to the appropriate legislation and the medium is 
"privatized"). 
Every privatization - especially that of a medium - provokes public 
opposition. There are (usually founded) suspicions that the interests of 
the public were compromised and sacrificed on the altar of 
commercialization and ratings. Fears of monopolization and cartelization of 
the medium are evoked - and justified, in due time - as is the fear of the 
concentration of control of the medium in a few hands. All 
these things do happen - but the pace is so slow that the initial fears are 
forgotten and public attention reverts to fresher issues. 
A new Communications Act was legislated in the USA in 1934. It was meant to 
transform radio frequencies into a national resource to be sold to the 
private sector, which would use them to transmit radio signals to receivers. 
In other words: the radio was passed on to private and commercial hands. 
Public radio was doomed to be marginalized. 
The American administration withdrew from its last major involvement in the 
Internet in April 1995, when the NSF ceased to finance some of the networks 
and, thus, privatized its hitherto heavy involvement in the net. 
A new Communications Act was legislated in 1996. It permitted "organized 
anarchy". It allowed media operators to invade each other's territories. 
Phone companies would be allowed to transmit video and cable companies to 
transmit telephony, for instance. This is all phased over a 
long period of time - still, it is a revolution whose magnitude is 
difficult to gauge and whose consequences defy imagination. It carries an 
equally momentous price tag - official censorship. "Voluntary censorship", 
to be sure, somewhat toothless standardization and enforcement authorities, 
to be sure - still, a censorship with its own institutions to boot. The 
private sector reacted by threatening litigation - but, beneath the surface 
it is caving in to pressure and temptation, constructing its own censorship 
codes both in the cable and in the internet media. 
Institutionalization 
This phase is the next in the Internet's history, though, it seems, 
unbeknownst to it. 
It is characterized by enhanced legislative activity. Legislators, at all 
levels, discover the medium and pounce on it passionately. Resources which 
were considered "free" are suddenly transformed into "national treasures 
not to be dispensed with cheaply, casually and with frivolity". 
It is conceivable that certain parts of the Internet will be "nationalized" 
(for instance, in the form of a licensing requirement) and tendered to the 
private sector. Legislation will be enacted to deal with permitted and 
disallowed content (obscenity? incitement? racial or gender bias?). 
No medium in the USA (not to mention the wider world) has escaped such 
legislation. There are sure to be demands to allocate time (or space, or 
software, or content, or hardware) to "minorities", to "public affairs", to 
"community business". This is a tax that the business sector will have to 
pay to fend off the eager legislator and his nuisance value. 
All this is bound to lead to a monopolization of hosts and servers. The 
important broadcast channels will diminish in number and be subjected to 
severe content restrictions. Sites which will not succumb to these 
requirements - will be deleted or neutralized. Content guidelines 
(euphemism for censorship) exist, even as we write, in all major content 
providers (CompuServe, AOL, Geocities, Tripod, Prodigy). 
The Bloodbath 
This is the phase of consolidation. The number of players is severely 
reduced. The number of browser types will be limited to 2-3 (Netscape, 
Microsoft and who else?). Networks will merge to form privately owned 
mega-networks. Servers will merge to form hyper-servers run on 
supercomputers in "server farms". The number of ISPs will be considerably 
cut. 
50 companies ruled the greater part of the media markets in the USA in 
1983. The number in 1995 was 18. At the end of the century they will number 
6. 
This is the stage when companies - fighting for financial survival - strive 
to acquire as many users/listeners/viewers as possible. Programming is 
levelled down to the lowest (and widest) common denominator. Shallow 
programming dominates as long as the bloodbath proceeds. 
From Rags to Riches 
Tough competition produces four processes: 
1. A Major Drop in Hardware Prices
This happens in every medium but it doubly applies to a computer-dependent 
medium, such as the Internet. 
Computer technology seems to abide by "Moore's Law", which says that the 
number of transistors which can be put on a chip doubles every 18 months. 
As a result of this miniaturization, computing power quadruples every 18 
months and an exponential series ensues. Organic-biological-DNA computers, 
quantum computers, chaos computers - prompted by vast profits and spawned 
by inventive genius - will ensure the longevity and continued applicability 
of Moore's Law. 
The Internet is also subject to "Metcalfe's Law": when we connect N 
computers to a network, its computing / processing power increases as the 
square of N. And these N computers become more powerful every year, in 
accordance with Moore's Law. 
The growth of computing power in networks compounds the effects of the two 
laws: more and more computers, each with ever-increasing computing power, 
get connected - yielding a sixteen-fold growth in the network's computing 
power every 18 months. 
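To make this compound arithmetic concrete, here is a minimal Python sketch 
built on the essay's own assumptions: per-node power quadruples every 18 
months, network power scales as the square of the node count, and the node 
count is further assumed (to reproduce the sixteen-fold figure) to double 
each period. The starting values are invented for illustration: 

    # A toy rendering of the compound-growth argument, under the essay's own
    # assumptions: per-node computing power quadruples per 18-month period
    # and network power scales as N squared, with the node count N assumed
    # to double each period. Starting values (1000 nodes, unit power) are
    # invented for illustration.
    def network_power(periods, n0=1000, p0=1.0):
        n = n0 * (2 ** periods)   # assumed: node count doubles each period
        p = p0 * (4 ** periods)   # assumed: per-node power quadruples
        return (n ** 2) * p       # Metcalfe-style N^2 scaling times node power

    for t in range(1, 4):
        ratio = network_power(t) / network_power(t - 1)
        print(f"period {t}: x{ratio:.0f}")   # prints x16 for every period

Each period multiplies the total by four (from N squared, with N doubling) 
times four (per-node power) - the sixteen-fold growth cited above. 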
2. Free Availability of Software and Connection
This is prevalent in the Net where even potentially commercial software can 
be downloaded for free. In many countries television viewers still pay for 
television broadcasts - but in the USA and many other countries in the 
West, the basic package of television channels comes free of charge. 
As users / consumers form a habit of using (or consuming) the software - it 
is commercialized and begins to carry a price tag. This is what happened 
with the advent of cable television: contents are sold for subscription and 
usage (Pay Per View - PPV) fees. 
Gradually, this is what will happen to most of the sites and software on 
the Net. Those which survive will begin to collect usage fees, access fees, 
subscription fees, downloading fees and other, appropriately named, fees. 
These fees are bound to be low - but it is the principle that counts. Even 
a few cents per transaction will accumulate to hefty sums with the traffic 
which will characterize the Net (or, at least its more popular locales). 
Advertising revenues will allow ISPs to offer free communication and storage 
volume. Gradually, connect time charges imposed by the phone companies will 
be eroded by tough competition from the likes of the cable companies. 
Accessing the internet might well be free of all charges in 10 years' time. 
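To put rough numbers on the claim that even a few cents per transaction 
accumulate to hefty sums, here is a one-line calculation; both figures 
below are invented placeholders, not data from the text: 

    # Illustrative arithmetic only: the fee and the traffic volume are
    # invented placeholders, not figures taken from the essay.
    fee_per_transaction = 0.03            # "a few cents"
    transactions_per_day = 10_000_000     # assumed traffic at a popular locale
    print(f"${fee_per_transaction * transactions_per_day:,.0f} per day")
    # -> $300,000 per day - hefty sums indeed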
3. Increased User Friendliness
As long as the computer is less user friendly and less reliable 
(predictable) than television - less of a black box - its potential (and 
its future) is limited. Television attracts 3.5 billion users daily. The 
Internet will attract - under the most exuberant scenario - less than one 
tenth of this number of people. The only reasons for this disparity are 
(the lack of) user friendliness and reliability. Even browsers, among the 
most user friendly applications ever - are not sufficiently so. The user 
still needs to know how to use a keyboard and must possess some basic 
acquaintance with the operating system. 
The more mature the medium, the friendlier it becomes. Finally, it will be 
operated using speech or common language. There will be room left for user 
"hunches" and built-in flexible responses. 
4. Social Taxes
Sooner or later, the business sector has to mollify the God of public 
opinion with offerings of a political and social nature. The Internet is an 
affluent, educated, yuppie medium. It necessitates a command of the English 
language, a lively interest in information and its various uses 
(scientific, commercial, other) and a lot of resources (free time, money to 
invest in hardware, software and connect time). It empowers - and thus 
deepens the divide between the haves and the have-nots, the knowing and the 
ignorant, the computer literate and the computer illiterate. 
In short: the Internet is an elitist medium. Publicly, this is an unhealthy 
posture. "Internetophobia" is already discernible. People (and politicians) 
talk about how unsafe the Internet is and about its possible uses for 
racial, sexist and pornographic purposes. The wider public is in a state of 
awe. 
So, site builders and owners will do well to begin to improve their image: 
provide free access to schools and community centres, bankroll internet 
literacy classes, freely distribute contents and software to educational 
institutions, collaborate with researchers and social scientists and 
engineers. 
In short: encourage the view that the Internet is a medium catering to the 
needs of the community and the underprivileged, a mostly altruist 
endeavour. This also happens to make good business sense by educating a 
future generation of users. He who visited a site free of charge as a 
student will pay to do so as an executive. Such a user will also pass on 
the information inside and outside his organization. This is called media 
exposure. 
The future will, no doubt, witness public Internet terminals, subsidized 
ISP accounts, free Internet classes and an alternative "non-commercial, 
public" approach to the Net. 
 
 
The Internet: Medium or Chaos? 
  
There has never been a medium like the Internet. The way it has formed, the 
way it was (not) managed, its hardware-software-communications 
specifications - are all unique. 
No Government 
The Internet has no central (or even decentralized) structure. In reality, 
it hardly has a structure at all. It is a collection of 16 million 
computers (end 1996) connected through thousands of networks. There are 
organizations which purport to set Internet standards (like the 
aforementioned ISOC, or the domain-name authority ICANN) - but they are all 
voluntary organizations, with no binding legal, enforcement, or 
adjudication powers. The result is often mayhem. 
Many erroneously call the Internet the first democratic medium. Yet, it 
hardly qualifies as a medium and by no stretch of terminology is it 
democratic. Democracy has institutions, hierarchies, order. The Internet 
has none of these things. There are some vague understandings as to what is 
and is not allowed. This is a "code of honour" (more reminiscent of the 
Sicilian Mob than of the British Parliament, let's say). Violations are 
punished by excommunication (of the violating site or person). 
The Internet has culture - but no education. Freedom of Speech is 
entrenched. Members of this virtual community react adversely to ideas of 
censorship, even when applied to hard core porno. In 1999, hackers hacked 
major government sites following an FBI initiative against hacking-related 
crimes. Government initiatives (in the USA, in France, the lawsuit against 
the General Manager of AOL in Germany) are acutely criticized. In the 
meantime, the spirit of the Internet prevails: the small man's medium. What 
seems to be emerging, though, is self censorship by content providers (such 
as AOL and CompuServe). 
Independence 
The Internet is not dependent upon a given hardware or software. True, it 
is accessible only through computers and there are dominant browsers. 
But the Internet accommodates any digital (bit transfer) platform. It will 
be incorporated in the future into portable computers, palmtops, PDAs, 
mobile phones, cable television, telephones (with voice interface), home 
appliances and even wrist watches. It will be accessible to all, regardless 
of hardware and software. 
The situation is, obviously, different with other media. There is standard 
hardware (the television set, the radio receiver, the digital print 
equipment). Data transfer modes are standardized as well. The only variable 
is the contents - and even this is standardized in an age of American 
cultural imperialism. Today, one can see the same television programs all 
over the globe, regardless of cultural or geographical differences. 
Here is a reasonable prognosis for the Internet: 
It will "broadcast" (it is, of course, a PULL medium, not a PUSH medium - 
see next chapter) to many kinds of hardware. Its functions will be 
controlled by 2-5 very common software applications. But it will differ 
from television in that contents will continue to be decentralized: every 
point on the Net is a potential producer of content at low cost. This is 
the equivalent of producing a talk show using a single home video camera. 
And the contents will remain varied. 
Naturally, marketing content (sites) will remain an expensive art. Sites 
will also be richer or poorer, in accordance with the investment made in 
them. 
Non Linearity and Functional Modularity 
The Internet is the first medium in human history that is non-linear and 
totally modular. 
A television program is broadcast from a transmitter, through the airwaves 
to a receiver (=the television set). The viewer sits opposite this receiver 
and passively watches. This is an entirely linear process. The Internet is 
different: 
When communicating through the Internet, there is no way to predict how the 
information will reach its destination. The routing of information through 
the network is completely random, very much like the principle governing 
the telephony system (but on a global scale). The latter is not a point-to-
point linear network. Rather, it is a network of networks. Our voice is 
transmitted back and forth inside a gigantic maze of copper wires and optic 
fibres. It seeps through any available wire - until it reaches its 
destination. 
It is the same with the Internet. 
Information is divided into packets. An address is attached to each packet, 
which - using the TCP/IP data transfer protocols - is dispatched to roam 
this worldwide labyrinth. But the path from one neighbourhood of London to 
another may traverse Japan. 
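The principle can be rendered in a few lines of Python. This is a toy 
sketch only - real TCP/IP adds handshakes, acknowledgements and 
retransmission - but it shows the core idea described above: numbered, 
addressed packets travel independently (simulated here by shuffling their 
arrival order) and are reassembled at the destination: 

    # Toy packet model: split a message into numbered packets, let the
    # "network" deliver them in arbitrary order, then reassemble them by
    # sequence number. Not real TCP/IP - just the routing-independence idea.
    import random

    def packetize(message, size=8):
        """Split a message into (sequence_number, payload) packets."""
        return [(i, message[i:i + size]) for i in range(0, len(message), size)]

    def deliver(packets):
        """Simulate the labyrinth: packets arrive in unpredictable order."""
        arrived = list(packets)
        random.shuffle(arrived)
        return arrived

    def reassemble(arrived):
        """The receiver restores the original order from sequence numbers."""
        return "".join(payload for _, payload in sorted(arrived))

    message = "From one neighbourhood of London to another, via Japan."
    assert reassemble(deliver(packetize(message))) == message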
The really ingenious thing about the Internet is that each computer (each 
receiver or end user) indeed burdens the system by imposing on it its 
information needs (as is the case with other media) - but it also assists 
in the task of pushing information packets on to their destinations. It 
seems that this contribution to the system outweighs the burdens imposed 
upon it. 
The network has a growth potential which is always bigger than the number 
of its users. It is as though television sets assisted in passing the 
signals received by them to other television sets. Every computer which is 
a member of the network is both a message (content) and a medium (active 
information channel), both a transmitter and a receiver. If 30% of all 
computers on the Net were to crash - there would be no operational impact 
(there is enormous built-in redundancy). Obviously, some contents would no 
longer be available (information channels would be affected). 
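The redundancy claim can be tested on a toy model. The sketch below builds 
a random network (the node count and links per node are arbitrary choices, 
not a model of the real Internet's topology), crashes 30% of the nodes and 
counts how many survivors can still reach one another: 

    # Toy redundancy test: on a richly connected random graph, removing 30%
    # of the nodes typically leaves the survivors in one connected component.
    import random

    def random_network(nodes=200, links_per_node=4):
        graph = {n: set() for n in range(nodes)}
        for n in graph:
            for m in random.sample([k for k in graph if k != n], links_per_node):
                graph[n].add(m)
                graph[m].add(n)              # links are bidirectional
        return graph

    def largest_component(graph, alive):
        seen, best = set(), 0
        for start in alive:
            if start in seen:
                continue
            stack, size = [start], 0
            while stack:
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                size += 1
                stack.extend(m for m in graph[node] if m in alive)
            best = max(best, size)
        return best

    net = random_network()
    alive = set(random.sample(sorted(net), int(len(net) * 0.7)))   # 30% crash
    print(largest_component(net, alive), "of", len(alive), "survivors connected")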
The interactivity of this medium is a guarantee against the monopolization 
of contents. Anyone with a thousand dollars can launch his/her own 
(reasonably sophisticated) site, accessible to all other Internet users. 
Space is available through home page providers. 
The name of the game is no longer the production - it is the creative 
content (design), the content itself and, above all, the marketing of the 
site. 
The Internet is an infinite and unlimited resource. This goes against the 
grain of the most basic economic concept (of scarcity). Each computer that 
joins the Internet strengthens it exponentially - and tens of thousands 
join daily. The Internet infrastructure (maybe with the exception of 
communication backbones) can accommodate an annual growth of 100% to the 
year 2020. It is the user who decides whether to increase the Internet's 
infrastructure by connecting his computer to it. By comparison: it is as 
though it were possible to produce and to broadcast radio programmes from 
every radio receiver. Each computer is a combination of studio and 
transmitter (on the Internet). 
In reality, there is no other interactive medium except the Internet. Cable 
TV does not allow two-way data transfer (from user to cable operator). If 
the user wants to buy a product - he has to phone. Interactive television 
is an abject failure (the Sony and TCI experiments were terminated). All 
this holds notwithstanding the combination of the Internet with satellite 
capabilities (VSAT) or with the resurgent digital television. 
The television screen is inferior when compared to the computer screen. 
Only the Internet is there as a true two-way possibility. The technological 
problems that besieged it are slowly dissipating. 
The Internet allows for one-dimensional and two-dimensional interactivity. 
One-dimensional interactivity: filling in and dispatching a form, sending 
and receiving messages (through e-mail or v-mail). 
Two-dimensional interactivity: talking to someone while both parties work 
on an application, seeing your interlocutor, talking to him and 
transferring documents to him for his perusal as the conversation continues 
apace. 
This is no longer science fiction. In less than five years this will be as 
common as the telephone - and it will have a profound effect on the 
traditional services provided by the phone companies. Internet phones, 
Internet videophones - they will be serious competitors and the phone 
companies are likely to react once they begin to feel the heat. This will 
happen when the Internet acquires black-box features. Phone companies, 
software giants and cable TV operators are likely to end up owning big 
chunks of the lucrative future market of the Net. 
The Solitary Medium 
The Internet is NOT a popular medium. It is the medium of affluent 
executives who fully master the English language, as part of a wider 
general education. 
Alternatively, it is the medium of academia (students, lecturers), or of 
children of the former, well-to-do group. In any case, it is not the medium 
of the "wide public". It is also a highly individualistic medium. 
The Internet was an initiative of the DOD (Department of Defence in the 
USA). It was later "requisitioned" by the National Science Foundation 
(NSF). This continuous involvement of the administration came to an end 
in 1995 when the medium was "privatized". 
This "privatization" was a recognition of the civilian roots of the 
Internet. It was - and is still being - formed by millions of information-
intoxicated users. They formed networks to exchange bits and pieces of 
mutual interest. Thus, as opposed to all other media, the Internet was not 
invented, nor was its market. The inventors of the telephone, the 
telegraph, the radio, the television and the compact disc - all invented 
previously non-existent markets for their products. It took time, effort 
and money to convince consumers that they needed these "gadgets". 
By contrast, the Internet was invented by its own consumers and so was the 
market for it. Only when the latter was fully forged did producers and 
businessmen join in. Microsoft began to hesitantly test the internet waters 
only in 1995! 
On Line Memories 
The Internet is the only medium with online memory, very much like the 
human brain. The memories of both - the Net and the brain - are immediately 
accessible. In both, memory is stored in sites and, in both, it neither 
ages nor is eliminated. It is possible to find sites which commemorate 
events the same way that the human mind registers them. This is Net Memory. 
The history of a site can be reviewed. The Library of Congress stores the 
consecutive development phases of sites. The Internet is an amazing 
combination of data processing software, data, a record of all the 
activities which took place in connection with the data and the memory of 
these records. Only the human brain boasts comparable capacities - and in 
it, too, one language serves all these functions: the language of the 
neurones. 
Even in computers (not to mention more conventional media, such as 
television) these functions are much more clearly separated. 
Raw English - the Language of Raw Materials 
The following - apparently trivial - observation is critical: 
All the other media provide us with processed, censored, "clean" content. 
The Internet is a medium of raw materials, partly well organized (the rough 
equivalent of a newspaper) - and partly still in raw form, yesterday's 
supper. 
This is a result of the immediate and absolute access afforded each user: 
access to programming and site publishing tools - as well as access to 
computer space on servers. This leads to varying degrees of quality of 
contents and content providers and this, in turn, prevents monopolization 
and cartelization of the information supply channels. 
The users of the Internet are still undecided: do they prefer drafts or 
finished newspapers? They frequent well-designed sites. There are even design 
competitions and awards. But they display a preference for sites that are 
constantly updated (i.e. closer in their nature to a raw material - rather 
than to a finished product). They prefer sites from which they can download 
material to quietly process at home, alone, on their PCs, at their leisure. 
Even the concept of "interactivity" points at a preference for raw 
materials with which one can interact. For what is interactivity if not the 
active involvement of the user in the creation of content? 
The Internet users love to be involved, to feel the power in their 
fingertips, they are all addicted to one form of power or another. 
Similarly, a completely self-driving, self-navigating car is not likely to 
sell well. Part of the experience of driving - the sensation of power 
("power steering") - is critical to the purchase decision. 
It is not in vain that the metaphor for using the Internet is "surfing" 
(and not, let's say, browsing). 
The problem is that the Internet is still predominantly an English language 
medium (though it is fast changing). It discriminates against those whose 
mother tongue is different. All software applications work best in English. 
Otherwise they have to be adapted and fitted with special fonts (Hebrew, 
Arabic, Japanese, Russian and Chinese - each present a different set of 
problems to overcome). This situation might change with the attainment of a 
critical mass of users (some say, 2 million per non-Anglophone country). 
Comprehensive (Virtual) Reality 
This is the first (though, probably, not the last) medium which allows the 
user to conduct his whole life within its boundaries. 
Television presents a clear division: there is a passive viewer. His task 
is to absorb information and subject it to minimal processing. The Internet 
embodies a complete and comprehensive (virtual) reality, a full fledged 
alternative to real life. 
The illusion is still in its infancy - and yet already powerful. 
The user can talk to others, see them, listen to music, see video, purchase 
goods and services, play games (alone or with others scattered around the 
globe), converse with colleagues, or with users with the same hobbies and 
areas of interest, to play music together (separated by time and space). 
And all this is very primitive. In ten years' time, the Internet will offer 
its users the option of video conferencing (possibly, three dimensional, 
holographic). The participants' figures will be projected on big screens. 
Documents will be exchanged, personal notes, spreadsheets, secret 
counteroffers. 
Virtual Reality games will become reality in less time. Special end-user 
equipment will make the player believe that he, actually, is part of the 
game (while still in his room). The player will be able to select an image 
borrowed from a database and it will represent him, seen by all the other 
players. Everyone will, thus, end up invading everyone else's private space 
- without encroaching on his privacy! 
The Internet will be the medium of choice for phone and videophone 
communication (including conferencing). 
Many mundane activities will be done through Internet: banking, shopping 
for standard items, etc. 
The above are examples of the Internet's power and ability to replace our 
reality in due time. A world out there will continue to exist - but, more 
and more, we will interact with it through the enchanted interface of the 
Net. 
 
 
A Brave New Net 
  
The future of a medium in the making is difficult to predict. Suffice it to 
mention the ridiculous prognoses which accompanied the PC (it is nothing 
but a gaming gadget; it is a replacement for the electric typewriter; it 
will be used only by businesses). The telephone also had its share of 
ludicrous statements: no one - claimed the "experts" - would want to 
converse without eye contact. Or television: only the Nazi regime seemed to 
have fully grasped its potential (in the Berlin 1936 Olympics). And Bill 
Gates thought that the internet had a very limited future as late as 1995! 
Still, this medium has a few characteristics which differentiate it from 
all its predecessors. Were these traits to be continuously and creatively 
exploited, a few statements could be made about the future of the Net with 
relative assurance. 
Time and Space Independence 
This is the first medium in history which does not require the simultaneous 
presence of people in space-time in order to facilitate the transfer of 
information. Television requires the existence of studio technicians, 
narrators and others on the transmitting side - and the availability of a 
viewer on the receiving side. The phone depends on the simultaneous 
availability of two or more parties. 
With time, tools to bridge the time gap between transmitter and receiver 
were developed. The answering machine and the video cassette recorder both 
accumulate information sent by a transmitter - and release it to a receiver 
in a different space and time. But they are discrete, their storage volume 
is limited and they do not allow for interaction with the transmitter. 
The Internet does not have these handicaps. 
It facilitates the formation of "virtual organizations / institutions / 
businesses / communities". These are groups of users who communicate from 
different points in space and time, united by a common goal or interest. 
A few examples: 
The Virtual Advertising Agency 
An account executive in the USA will manage the account of a hi-tech firm 
based in Sydney. He will work with technical experts from Israel and with a 
French graphics office. They will all file their work on the Net (through 
an intranet), to be studied by the other members of this virtual group. 
These will enter the right site after clearing firewall security software. 
They will all be engaged in flexiwork (flexible working times) and work 
from their homes or offices, as they please. Obviously, they will all abide 
by a general schedule. 
They will exchange audio files (the jingle, for instance), graphics, video, 
colour photographs and text. They will comment on each other's work and 
make suggestions using e-mail. The client will witness the whole creative 
process and will be able to contribute to it. There is no technological 
obstacle preventing the participation of the client's clients, as well. 
Virtual Rock'n'Roll 
It is difficult to imagine that "virtual" performances will replace 
real-life ones. 
The mass rock concert has its own inimitable sounds, palette and smells. 
But a virtual production of a record is on the cards and it is tens of 
percent cheaper than a normal production. Again, the participants will 
interact through the intranet. They will swap notes, play their own 
instruments, make comments by e-mail and play together using appropriate 
software. If one of them is grabbed by inspiration in the middle of (his) 
night, he will be able to preserve and pass on his ideas through the Net. 
The creative process will be aided by novel applications which enable the 
simultaneous transfer of sound over the Net. The processes which are 
already digitized (the mix, for one) will pose no problem to a digitized 
medium. Other applications will let the users listen to the final versions 
and even ask the public for its preview opinion. 
Thus, even creative processes which are perceived as demanding human 
presence - will no longer do so with the advent of the Net. 
Perhaps it is easier to understand a Virtual Law Firm or Virtual 
Accountants Office. 
In the extreme, such a firm will not have physical offices, at all. The 
only address will be an e-mail address. Dozens of lawyers from all over the 
world with hundreds of specialities will be partners in such an office. 
Such an office will be truly multinational and multidisciplinary. It will 
be fast and effective because its members will electronically swap 
information (precedents, decrees, laws, opinions, research and plain ideas 
or professional experience). 
It will be able to service clients in every corner of the globe. It will 
involve the transfer of audio files (NetPhones), text, graphics and video 
(crucial in certain types of litigation). Today, such information is sent 
by post and messenger services. Whenever different types of information are 
to be analysed - a physical meeting is a must. Otherwise, each type of 
information has to be transferred separately, using unique equipment for 
each one. 
Simultaneity and interactivity - this will be the name of the game in the 
Internet. The professional term is "Coopetition" (cooperation between 
potential competitors, using the Internet). 
Other possibilities: a virtual production of a movie, a virtual research 
and development team, a virtual sales force. The harbingers of the virtual 
university, the virtual classroom and the virtual (or distance) medical 
centre are here. 
The Internet - Mother of all Media 
The Internet is the technological solution to the mythological "home 
entertainment centre" debate. 
It is almost universally agreed that, in the future, a typical home will 
have one apparatus which will give it access to all types of information. 
Even the most daring did not talk about simultaneous access to all the 
types of information or about full interactivity. 
The Internet will offer exactly this: access to every conceivable type of 
information simultaneously, the ability to process it all at the same time 
and full interactivity. The future image of this home centre is fairly 
clear - it is the timing that is not. It all depends on the availability of 
wide (information) bandwidth - through which it will be possible to 
transfer large amounts of data at high speeds over the same communications 
line. Fast modems and optic fibres were coupled with faulty planning and a 
poor vision of future needs. The cable television industry, 
for instance, is totally technologically unprepared for the age of 
interactivity. This is only partly the result of unwise, restrictive, 
legislation which prohibits data vendors from stepping on each others' 
toes. Phone companies were not permitted to provide Internet services or to 
transfer video through their wires - and cable companies were not allowed 
to transmit phone calls. 
It is a question of time until these fossilized remains are removed by the 
almighty hand of the market. When this happens, the home centre is likely 
to look like this: 
A central computer attached to a big screen divided into windows. Television 
is broadcast on one window. A software application is running on another. 
This could be an application connected to the television program (deriving 
data from it, recording it, collating it with pertinent data it picks out 
of databases). It could be an independent application (a computer game). 
Updates from the New York Stock Exchange flash at the corner of the screen 
and an icon blinks to signal the occurrence of a significant economic 
event. 
A click of the mouse (?) and the news flash is converted to a voice 
message. Another click and your broker is on the InternetPhone (possibly 
seen in a third window on the screen). You talk, you send him a fax 
containing instructions and you compare notes. The fax was printed on a 
word processing application which opened up in yet another window. 
Many believe that communication with the future generation of computers 
will be voice communication. This is difficult to believe. It is weird to 
talk to a machine (especially in the presence of other humans). We are 
seriously inhibited this way. Moreover, voice will interrupt other people's 
work or pleasure. It is also close to impossible to develop efficient voice 
recognition software. Not to mention mishaps such as accidental activation. 
The Friendly Internet 
The Internet will not escape the processes experienced by all other media. 
It will become easy to operate - "user-friendly", in professional parlance. 
Today, it requires too much specialized knowledge and is not accessible to 
those who lack basic hardware and (Windows) software concepts. 
Alas, most of the population falls into the latter category. Only 30 
million "Windows" operating systems had been sold worldwide by the end of 
1996. 
Even if this constitutes 20% of all the copies (the rest being pirated 
versions) - it still represents less than 3% of the population of the 
world. And this, needless to say, is the world's most popular software 
(following the DOS operating system). 
The Internet must rely on something completely different. It must have 
sophisticated, transparent-to-the-user search engines to guide users 
through the cavernous, chaotic libraries which will typify it. The search 
engines must include complex decision-making algorithms. They must 
understand common languages and respond in mundane speech. They will be 
efficient and incredibly fast because they will form their own search 
strategy (supplanting the user's faulty use of syntax). 
These engines, replete with smart agents, will refer the user to additional 
data and to cultural products which reflect the user's history of 
preferences (or pronounced preferences, expressed in answers to feedback 
questionnaires). All the decisions and activities of the user will be 
stored in the memory of his search engine and will assist it in designing 
its decision-making trees. The engine will become an electronic friend and 
will advise the user, even on professional matters. 
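A minimal sketch of this "electronic friend" follows. The essay specifies 
no algorithm, so the scoring scheme here - simple keyword-overlap counts 
over remembered choices - is an assumption, chosen only to show how stored 
decisions can shape future rankings: 

    # Minimal preference-remembering agent: every choice the user makes is
    # stored, and fresh results are re-ranked toward the remembered topics.
    # The keyword-overlap scoring below is an illustrative assumption.
    from collections import Counter

    class PersonalSearchAgent:
        def __init__(self):
            self.preferences = Counter()     # the user's stored history

        def record_choice(self, title):
            self.preferences.update(title.lower().split())

        def rank(self, results):
            def score(title):
                return sum(self.preferences[w] for w in title.lower().split())
            return sorted(results, key=score, reverse=True)

    agent = PersonalSearchAgent()
    agent.record_choice("classical music reviews")
    agent.record_choice("baroque music archives")
    print(agent.rank(["football scores", "new music releases", "stock quotes"]))
    # "new music releases" rises to the top, reflecting the stored history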
Cease-Fire 
The cessation of hostilities between the Internet and some off-the-shelf 
software applications heralds the commencement of the integration between 
the desktop computer and the Net. This is a small step for the user - and a 
big one for humanity. The animosity which prevailed until recently between 
UNIX systems and the HTML language on the one hand, and most of the 
standard applications (headed by the word processors) on the other, 
officially ended with the introduction of Office 97, which incorporates 
full HTML capabilities. 
With the Office 2000 products, the distinctions between a web computing 
environment and a PC computing one - have all but vanished. Browsers can 
replace operating systems, word processors can browse, download and upload 
- the PC has finally been entirely absorbed by its offspring, the internet. 
The Portable Document Format (PDF) enables the user to work the Internet 
off-line. In other words: text files will be loaded into word processors 
and edited off-line. The same applies to other types of files (audio, 
video). Downloading will also speed up (today it takes so long to download 
an audio or video file that, many times, it is impracticable). 
This is not a trivial matter. The ability to switch between on-line and 
off-line states and to continue the work, uninterrupted - this ability 
means the integration of the PC in the Internet. 
There are two competing views concerning the future of computer hardware 
and both of them acknowledge the importance of the Internet. 
Bill Gates - Microsoft's legendary boss - says that the PC will continue to 
advance and strengthen its processing and computing powers. The Internet 
will be just another tool available through telecommunications, rather than 
through the ownership of hard copies of software and data. The Internet is 
perceived to be a tremendous external database, available for processing by 
tomorrow's desktops. This view has lately been gradually reversed, in view 
of the incredible vitality and power of the Internet. 
Gates is converging on the worldview held by Sun Microsystems. 
The future desktop will be a terminal, albeit powerful and with 
considerable processing, computing and communications capabilities. The 
name of the game will be the Internet itself. The terminal will access 
Internet databases (containing raw or processed data) and satisfy its 
information needs. 
This terminal - equipped with languages like Java - will tap into libraries 
of software applications. It will make use of components of different 
applications as the need arises. When finished using the 
component, the terminal will "return" it to the virtual "shelf" until the 
next time it is needed. 
This will minimize memory resources in the desktop. 
The truth, as always, is probably somewhere in the middle. 
Tomorrow's computer will be a home entertainment centre. No consumer will 
accept total dependence on telecommunications and on the Net. They will all 
ask for processing and computing power at their fingertips, a la Bill 
Gates. 
But tomorrow's computer will also function as a terminal when needed: when 
retrieving data or when using non-standard software applications. Why 
purchase rarely used, expensive applications - when they are available, for 
a fraction of the cost, on the Net? 
In other words: no consumer will subjugate his frequent word processing 
needs to the whims of the local phone company, or to those of the site 
operator. That is why every desktop is still likely to include 
disk-resident (hard or optical) word processing software. But very few 
users will buy CAD-CAM, animation, graphics, or publishing software which 
they are likely to use infrequently. Instead, they will access these 
applications, which will be resident on the Net, and use only those parts 
that are needed. This is usage tailored to the client's needs. This is also 
the integration of a desktop (not of a terminal) with the Net. 
Decentralized Lack of Planning 
The course adopted by content creators (producers) in the last few years 
proves the maxim that it is easy to repeat mistakes and difficult to derive 
lessons from them. Content producers are constantly buying channels to 
transfer their contents. This is a mistake. A careful study of the history 
of successful media (e.g., television) points to a clear pattern: 
Content producers do not grant life-long exclusivity to any single channel 
- especially not by buying into it. They prefer to contract for a limited 
time with content distributors (their broadcast channels). They work with 
all of them, sometimes simultaneously. 
In the future, the same content will be sold on different sites or 
networks, at different times. Sometimes it will be found with a provider 
which is a combination of cable TV company and phone company - at other 
times, it will be found with a provider with expertise in computer 
networks. Much content will be created locally and distributed globally - 
and vice versa. The repackaging of branded contents will be the name of the 
game in both the media firms and the firms which control contents 
distribution (=the channels). 
No exclusivity pact will survive. Networks such as CompuServe are doomed 
and have been doomed since 1993. The approach of decentralized access, 
through numerous channels, to the same information - will prevail. 
The Transparent Language 
The Internet will become the next battlefield between have countries and 
have-not countries. It will be a cultural war zone (English against French, 
Japanese, Chinese, Russian and Spanish). It will be politically charged: 
those wishing to restrict the freedom of speech (authoritarian and 
dictatorial regimes, governments, conservative politicians) against 
free-speech advocates. It will become a new arena of warfare and an 
integral part of actual wars. 
Different peer groups, educational and income (socio-economic) strata and 
ethnic and sexual-preference groups will all fight in the eternal fields of 
the Internet. 
Yet, two developments are likely to pacify the scene: 
Automatic translation applications (like Accent and the Alta Vista 
translation engines) will make every bit of information accessible to all. 
The linguistic (and, by extension, ethnic or national) source of the 
information will be disguised. A feeling of a global village will permeate 
the medium. Being ignorant of the English language will no longer hinder 
one's access to the Net. Equal opportunities. 
The second trend will be the new classification methods of contents on the 
Net together with the availability of chips intended to filter offensive 
information. Obscene material will not be available to tender souls. 
Anti-Semitic sites will be blocked for Jews and communists will be spared 
"Evil Empire" speeches. Filtering will usually be done using extensive and 
adaptable lists of keywords or key phrases. 
This will lead to the formation of cultural Internet Ghettos - but it will 
also considerably reduce tensions and largely derail populist legislative 
efforts aimed at curbing or censoring free speech. 
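The keyword-list filtering described above reduces to a few lines of code. 
The blocklist entries and the sample page below are hypothetical; real 
lists would be extensive and adapted per household: 

    # Minimal keyword/key-phrase filter: a page is withheld when it matches
    # any entry on an adaptable blocklist. All entries here are placeholders.
    BLOCKLIST = {"offensive phrase", "forbidden word"}

    def is_blocked(page_text, blocklist=BLOCKLIST):
        text = page_text.lower()
        return any(entry in text for entry in blocklist)

    page = "An article containing a Forbidden Word somewhere in its body."
    print(is_blocked(page))   # True: this page would be withheld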
Public Internet - Private Internet 
The day is not far off when every user will be able to define his areas of 
interest, order of priorities, preferences and tastes. Special applications 
will scour the Net for him and retrieve the material befitting his 
requirements. This material will be organized in any manner prescribed. 
A private newspaper comes to mind. It will have a circulation of one copy - 
the user's. It will borrow its contents from a few hundred databases and 
electronic versions of newspapers on the Net. Its headlines will 
reflect the main areas of interest of its sole subscriber. The private 
paper will contain hyperlinks to other sites in the Internet: to reference 
material, to additional information on the same subject. It will contain 
text, but also graphics, audio, video and photographs. It will be 
interactive and editable with the push of a button. 
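A crude sketch of such a one-copy edition is easy to write. The sources, 
headlines and interests below are all invented placeholders; a real edition 
would draw on hundreds of networked databases: 

    # Toy "private newspaper": keep only the headlines, from any source,
    # that match the sole subscriber's declared interests. All of the data
    # here is invented for illustration.
    INTERESTS = {"internet", "publishing", "music"}

    SOURCES = {
        "wire-service": ["Internet usage doubles again", "Election results due"],
        "trade-paper":  ["E-publishing fees debated", "Steel output steady"],
    }

    def private_edition(sources, interests):
        edition = []
        for source, headlines in sources.items():
            for headline in headlines:
                if any(word in headline.lower() for word in interests):
                    edition.append(f"{headline}  [{source}]")
        return edition

    for line in private_edition(SOURCES, INTERESTS):
        print(line)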
Another idea: the intelligent archive. 
The user will accumulate information, derived from a variety of sources in 
an archive maintained for him on the Net. It will not be a classical "dead" 
archive. It will be active. A special application will search the Net daily 
and update the archive. It will contain hyperlinks to sites, to additional 
information on the Net and to alternative sources of information. It will 
have a "History" function which will teach the archive about the 
preferences and priorities of the user. 
The software will recommend to him new sites and subjects similar to those 
in his history. It will alert him to movies, TV shows and new musical 
releases - all within his cultural sphere. If he is convinced to purchase, 
the software will order the wares over the Net. It will then let him listen 
to the music, see the movie, or read the text. 
The internet will become a place of unceasing stimuli, of internal order 
and organization and of friendliness in the sense of personally rewarding 
acquaintance. Such an archive will be a veritable friend. It will alert the 
user to interesting news and leave messages and food for thought in his 
e-mail (or v-mail). It will send the user a fax if its messages are not 
responded to within a reasonable time. It will issue reports every morning. 
This, naturally, is only a private case of the archival potential of the 
Net. 
A network connecting more than 16.3 million computers (end 1996) is also 
the biggest collective memory effort in history after the Library of 
Alexandria. The Internet possesses the combined power of all its 
constituents. Search engines are, therefore, bound to be replaced by 
intelligent archives - universal archives which will store the paths to the 
results of past searches, plus millions of recommended searches. 
Compare this to a newspaper: it is much easier to store back issues of a 
paper on the Internet than physically. Obviously, it is much easier to 
search - and such a copy never wears out. Such an archive will let the user 
search by word, by key phrase, or by contents, search the bibliography and 
hop to other parts of the archive or to other territories of the Internet 
using hyperlinks. 
Money, Again 
We have already mentioned SET, the security standard. It will facilitate 
credit card transactions over the Net. These transactions are safe even 
today - but there is an ingrained interest in claiming otherwise. 
Newspapers are afraid that advertising budgets will migrate to the Web. 
Television harbours the same fears. More commerce on the Net means more 
advertising dollars diverted from the established media. Too many feel 
unhappy when confronted with this inevitability. They spread lies which 
feed off ignorance about how safe paying with credit cards on the Net 
really is. Security standards will terminate this propaganda and transform 
the Internet into a commercial medium. 
Users will be able to buy and sell goods and services on the Net and get 
them by post. Certain things will be directly downloaded (software, e-
books). Many banking transactions and EDI operations will be conducted 
through bank-client intranets. All stock and commodity exchanges will be 
accessible and the role of brokers will be minimized. Foreign exchange will 
be easily tradable and transferable. Initial Public Offerings of shares, 
day trading of stocks and other activities traditionally connected with 
physical ("pit") capital markets will become a predominant feature of the 
internet. The day is not far off when the likes of Merrill Lynch will be 
offering full services (including advisory services) through the internet. 
The first steps towards electronic trading of shares (with discounted fees) 
have already been taken in mid-1999. Home banking, private newspapers, 
subscriptions to cultural events, tourism packages and airline tickets - 
are all candidates for Net-Trading. 
The Internet is here to stay. 
Commercially, it would be an extreme strategic error to ignore it. A lot of 
money will flow through it. A lot more people will be connected to it. A 
lot of information will be stored on it. 
It is worth being there. 
Tel-Aviv, 4/96.  
Partially Revised: 7/00. 
 
Appendix - Ethics and the Internet 
  
The "Internet" is a very misleading term. It's like saying "print". 
Professional articles are "print" - and so are the sleaziest porno 
brochures. 
So, first, I think it would be useful to make a distinction between two 
broad categories of sites: content-driven and interaction-driven. 
Most content driven sites maintain reasonable ethical standards, roughly 
comparable to the "real" or "non-virtual" media. This is because many of 
these sites were established by businesses with a "real" dimension to start 
with (Walt Disney, The Economist, etc.). These sites (at least the 
institutional ones) maintain standards of privacy, veracity, cross-checking 
of information, etc. 
Personal home pages would be a sub-category of content-driven sites. These 
cannot be seriously considered "media". They are representatives of the new 
phenomenon of extreme narrowcasting. They do not adhere to any ethical 
standards, with the exception of those upheld by their owners. 
The interaction-oriented sites and activities can, in turn, be divided into 
e-commerce sites (such as Amazon), which adhere to commercial law and 
commercial ethics, and interactive sites. 
The latter - discussion lists, mailing lists and so on - are a hotbed of 
unethical, verbally aggressive, hostile behaviour. A special vocabulary 
developed to discuss these phenomena ("flaming", "mail bombing" etc.). 
To summarize: 
Where the aim is to provide consumers with another venue for the 
dissemination of information, or to sell them products or services, the 
ethical standards maintained reflect those upheld outside the realm of the 
internet. Additionally, codified morality - commercial law - is adhered to. 
Where the aim is interaction, or the dissemination of the personal opinions 
and views of site owners, ethical standards are still in the making. A 
rough set of guidelines has coalesced into the "netiquette" - a set of 
rules of peaceful co-existence intended to prevent flame wars and the 
eruption of interpersonal verbal abuse. Since it lacks effective means of 
enforcement, it is very often violated and constitutes an expression of 
goodwill rather than a binding code. 


The Internet in the Countries in Transition
By: Sam Vaknin

Though the countries in transition are far from being a homogeneous lot, 
there are a few denominators common to their Internet experience hitherto: 
1. Internet Invasion 
The penetration of the Internet in the countries in transition varies from 
country to country - but is still very low even by European standards, not 
to mention American ones. This has to do with the lack of infrastructure, 
the prohibitive cost of services, an extortionist pricing structure, 
computer illiteracy and luddism (computer phobia). Societies in 
the countries in transition are inert (and most of them, conservative or 
traditionalist) - following years of central mis-planning. The Internet 
(and computers) are perceived by many as threatening - mainly because they 
are part of a technological upheaval which makes people redundant. 
2. The Rumour Mill 
All manner of instant messaging - mainly the earlier versions of IRC - 
played an important role in enhancing social cohesion and exchanging 
uncensored information. As in other parts of the world, the Internet was 
first used to communicate: IRC, mIRC, e-mail and e-mail fora were - and, to 
a large extent, still are - all the rage. 
The IRC was (and is) used mainly to exchange political views and news and 
to engage in inter-personal interactions. The media in countries in 
transition are notoriously unreliable. Decades of official indoctrination 
and propaganda left people reading between the (real or imaginary) lines. 
Rumours and gossip always substituted for news and the Internet was well 
suited to become a prime channel of dissemination of conspiracy theories, 
malicious libel, hearsay and eyewitness accounts. Instant messaging 
services also led to an increase in the number (though not necessarily in 
the quality) of interactions between the users - from dating to the 
provision of services, the Internet was enthusiastically adopted by a 
generation of alienated youth, isolated from the world by official doctrine 
and from each other by paranoia fostered by the political regime. The 
Internet exposed its users to the West, to other models of existence where 
trust and collaboration play a major role. It increased the quantity of 
interaction among them. It fostered a sense of identity and community. 
The Internet is not ubiquitous in the countries in transition and, 
therefore, its impact is very limited. It has had no discernible effect on 
how governments work in this region. Even in the USA it is just starting to 
affect political processes and to be integrated into them. 
The Internet encouraged entrepreneurship and aspirations of social 
mobility. Very much like mobile telephony - which allowed the countries in 
transition to skip massive investments in outdated technologies - the 
Internet was perceived to be a shortcut to prosperity. Its decentralized 
channels of distribution, global penetration, "rags to riches" ethos and 
dizzying rate of innovation - attracted the young and creative. Many
decided to become software developers and to establish local versions
of "Silicon Valley" or of the flourishing software industry in India.
Anti-virus software was developed in Russia, web design services in
the former Yugoslavia, e-media in the Czech Republic, and so on. But
this is the preserve of a minuscule part of society. E-commerce, for
instance, is a long way off (though m-commerce might arrive sooner in
countries like the Czech Republic or the Baltic states).
E-commerce is the natural culmination of a process. You need to have a rich 
computer infrastructure, a functioning telecommunications network, cheap 
access to the Internet, computer literacy, inability to postpone 
gratification, a philosophy of consumerism and, finally, a modicum of trust 
between the players in the economy. The countries in transition lack
all of the above. Most of their inhabitants are not even aware that
the Internet exists, let alone of what it can do for them. Penetration
rates, the number of computers per household, the number of phone
lines per household, the reliability of the telecommunications
infrastructure and the number of Internet users at home (and at
work) - are all dismally low. On the other hand, the cost of accessing
the net is still prohibitively high. It would be a wild exaggeration
to call the budding Internet enterprises in the countries in
transition - "industries". There are isolated cases of success, that
is all. They sprang up in response to local demand, expanded
internationally on rare occasions and, on the whole, remained confined
to their locales. There was no agreement among countries and
entrepreneurs as to who would develop what. It was all purely
haphazard.
3. The Great Equalizer 
Very early on, the denizens of the countries in transition caught on
to the "great equalizer" effects of the Net. They used it to vent their
frustrations and aggression, to conduct cyber-warfare, to unleash an 
explosion of visual creativity and to engage in deconstructive discourse. 
By "great equalizer" I mean an equalizer with the rich, developed
countries (see the article quoted above). The citizens of the
countries in transition
are frustrated by their inability to catch up with the affluence and 
prosperity of the West. They feel inferior, neglected, looked down upon, 
dictated to and, in general, put down. The Internet is perceived as
something that can restore the balance. Of course, it cannot. It is
still a rich people's medium. President Clinton has pointed out the
Digital Divide within America - but such a divide exists, to a much
larger extent and with more venomous effects, between the developed
and the developing world. The Internet has done nothing to bridge this
gap - on the contrary: it enhanced the productivity and economic
growth of rich countries (mainly the States) - a phenomenon known as
"The New Economy" - and left the have-nots in the dust.
4. Intellectual Property 
The concept of intellectual property - foreign to the global Internet 
culture to start with - became an emblem of Western hegemony and 
monopolistic practices. Copyright violation, software piracy and
hacking became both status symbols and political declarations of
sorts. But the rapid dissemination of programs and information (for
instance, illicit copies of reference works) also served to level the
playing field.
Piracy is quite prevalent in the countries in transition - they are
second only to Asia as a capital of counterfeiting. Software, films,
even books are copied and distributed quite freely and openly. There
are street vendors who deal in the counterfeit products - but most of
the merchandise is sold through stores and OEMs.
I think that intellectual property will go the way the pharmaceutical
industry did: instead of tilting at windmills, owners and distributors
of intellectual property will join the trend. They are likely to team
up with sponsors that will subsidize the price of intellectual
property in order to make it affordable to the denizens of poor
countries. Such sponsors could be either multilateral institutions
(such as the World Bank) - or charities and donors.


The Selfish Net - The Semantic Web
By: Sam Vaknin
A decade after the invention of the World Wide Web, Tim Berners-Lee is 
promoting the "Semantic Web". The Internet hitherto is a repository of 
digital content. It has a rudimentary inventory system and very crude data 
location services. As a sad result, most of the content is invisible and 
inaccessible. Moreover, the Internet manipulates strings of symbols, not 
logical or semantic propositions. In other words, the Net compares values 
but does not know the meaning of the values it thus manipulates. It is 
unable to interpret strings, to infer new facts, to deduce, induce, derive, 
or otherwise comprehend what it is doing. In short, it does not understand 
language. Run an ambiguous term by any search engine and these shortcomings 
become painfully evident. This lack of understanding of the semantic 
foundations of its raw material (data, information) prevents applications 
and databases from sharing resources and feeding each other. The Internet 
is discrete, not continuous. It resembles an archipelago, with users 
hopping from island to island in a frantic search for relevancy.
Even visionaries like Berners-Lee do not contemplate an "intelligent Web". 
They are simply proposing to let users, content creators, and web 
developers assign descriptive meta-tags ("name of hotel") to fields, or to 
strings of symbols ("Hilton"). These meta-tags (arranged in semantic and 
relational "ontologies" - lists of metatags, their meanings and how they 
relate to each other) will be read by various applications and allow them 
to process the associated strings of symbols correctly (place the word 
"Hilton" in your address book under "hotels"). This will make information 
retrieval more efficient and reliable and the information retrieved is 
bound to be more relevant and amenable to higher level processing 
(statistics, the development of heuristic rules, etc.). The shift is from 
HTML (whose tags are concerned with visual appearances and content 
indexing) to languages such as the DARPA Agent Markup Language, OIL 
(Ontology Inference Layer or Ontology Interchange Language), or even XML 
(whose tags are concerned with content taxonomy, document structure, and 
semantics). This would bring the Internet closer to the classic library 
card catalogue.
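
A minimal sketch may make the mechanism concrete. The record and tag
names below (contact, hotel, name, city) are hypothetical, invented
here for illustration - real ontologies such as DAML or OIL fix shared
vocabularies far more rigorously - but it shows how an application can
file the bare string "Hilton" correctly by reading its enclosing tags
instead of guessing at its meaning:

import xml.etree.ElementTree as ET

# A hypothetical, semantically tagged record. The tag names are
# illustrative only; a shared ontology would fix their meanings.
record = """
<contact>
  <hotel>
    <name>Hilton</name>
    <city>Prague</city>
  </hotel>
</contact>
"""

address_book = {}
root = ET.fromstring(record)
# The application never guesses what "Hilton" means - the
# enclosing <hotel> tag tells it where the entry belongs.
for hotel in root.iter("hotel"):
    address_book.setdefault("hotels", []).append(hotel.findtext("name"))

print(address_book)   # prints: {'hotels': ['Hilton']}

Once such tags - and the ontologies behind them - are standardized,
applications can exchange data of this kind without human mediation.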
Even in its current, pre-semantic, hyperlink-dependent, phase, the Internet 
brings to mind Richard Dawkins' seminal work "The Selfish Gene" (OUP, 
1976). This would be doubly true for the Semantic Web.
Dawkins suggested generalizing the principle of natural selection to a
law of the survival of the stable: "A stable thing is a collection of
atoms which is permanent enough or common enough to deserve a name".
He then
proceeded to describe the emergence of "Replicators" - molecules which 
created copies of themselves. The Replicators that survived in the 
competition for scarce raw materials were characterized by high longevity, 
fecundity, and copying-fidelity. Replicators (now known as "genes") 
constructed "survival machines" (organisms) to shield them from the 
vagaries of an ever-harsher environment.
This is very reminiscent of the Internet. The "stable things" are
HTML-coded web pages. They are replicators - they create copies of themselves 
every time their "web address" (URL) is clicked. The HTML coding of a web 
page can be thought of as "genetic material". It contains all the 
information needed to reproduce the page. And, exactly as in nature, the 
higher the longevity, fecundity (measured in links to the web page from 
other web sites), and copying-fidelity of the HTML code - the higher its 
chances to survive (as a web page).
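The analogy lends itself to a toy computation. In the sketch below the
link graph is invented; "fecundity" is, per the definition above,
simply the number of links pointing at each page - the analogue of
counting a gene's copies:

# A hypothetical, hard-coded link graph: each page maps to the
# pages it links to. Real figures would come from a web crawl.
links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html", "b.html"],
}

# Fecundity, in the essay's analogy: inbound links per page.
fecundity = {}
for targets in links.values():
    for target in targets:
        fecundity[target] = fecundity.get(target, 0) + 1

print(fecundity)   # prints: {'b.html': 2, 'c.html': 2, 'a.html': 1}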
Replicator molecules (DNA) and replicator HTML have one thing in common - 
they are both packaged information. In the appropriate context (the right 
biochemical "soup" in the case of DNA, the right software application in 
the case of HTML code) - this information generates a "survival machine" 
(organism, or a web page). 
The Semantic Web will only increase the longevity, fecundity, and
copying-fidelity of the underlying code (in this case, OIL or XML
instead of HTML).
By facilitating many more interactions with many other web pages and 
databases - the underlying "replicator" code will ensure the "survival" of 
"its" web page (=its survival machine). In this analogy, the web page's 
"DNA" (its OIL or XML code) contains "single genes" (semantic meta-tags). 
The whole process of life is the unfolding of a kind of Semantic Web.
In a prophetic paragraph, Dawkins described the Internet:
"The first thing to grasp about a modern replicator is that it is highly 
gregarious. A survival machine is a vehicle containing not just one gene 
but many thousands. The manufacture of a body is a cooperative venture of 
such intricacy that it is almost impossible to disentangle the contribution 
of one gene from that of another. A given gene will have many different 
effects on quite different parts of the body. A given part of the body will 
be influenced by many genes and the effect of any one gene depends on 
interaction with many others...In terms of the analogy, any given page of 
the plans makes reference to many different parts of the building; and each 
page makes sense only in terms of cross-reference to numerous other pages."
What Dawkins neglected in his important work is the concept of the Network. 
People congregate in cities, mate, and reproduce, thus providing genes with 
new "survival machines". But Dawkins himself suggested that the new 
Replicator is the "meme" - an idea, belief, technique, technology, work of 
art, or bit of information. Memes use human brains as "survival machines" 
and they hop from brain to brain and across time and space 
("communications") in the process of cultural (as distinct from biological) 
evolution. The Internet is a latter-day meme-hopping playground. But,
more importantly, it is a Network. Genes move from one container to
another through a linear, serial, tedious process which involves
prolonged periods of one-on-one gene shuffling ("sex") and gestation.
Memes use networks.
Their propagation is, therefore, parallel, fast, and all-pervasive. The 
Internet is a manifestation of the growing predominance of memes over 
genes. And the Semantic Web may be to the Internet what Artificial 
Intelligence is to classic computing. We may be on the threshold of a self-
aware Web.


