
The Emerging Technology Scene: Giga's Showcase for IT Innovation

Presented by Giga Information Group, December 7-9, 1998, La Quinta, California.

IT Innovation was the theme of Giga's Emerging Technology Conference. Futurist Paul Saffo gave the keynote address, focusing on long-term innovation. Most of the remainder of the conference was geared to current innovation with analysis of industry trends and showcasing of new products.

Giga analyst Rob Enderle opening the conference. Giga analyst Rob Enderle opened the conference, which was organized around five themes or "TechnoVistas": The Amazing Future of the Desktop, Behind the Scenes at the Web Site, Serving Up Servers, The Network of the Well-Connected Enterprise, and The Boomer's Kids – What's Coming Next. The sessions were fast-paced and the format innovative.
Each session began with an overview presentation by a Giga analyst. This was followed by product presentations. Giga invited vendors to submit proposals, and selected five innovative products for presentation in each TechnoVista session. The company with the product the analysts liked best was also invited to give a "scene setting" session keynote. The Scene Setter was followed by 8-minute demonstrations of each of the five products. Next came open discussion between the analyst leading the session, a challenging analyst (a gentle curmudgeon) from Giga, a panel of executives from user organizations, and questions and comments from the audience. There was also a wireless polling system, which the audience used to evaluate the demonstrated products and to answer questions about their organizations and practice.

We will present a summary of the Conference keynote address, the five TechnoVista sessions, and other conference activities which included Executive Exchanges, Investor Insight sessions, product exhibits, and audience polls.

Beyond Digital: What Does the Future of Technology Hold?

Conference keynote speaker, Paul Saffo.
  • Paul Saffo, Director, The Institute for the Future

In his conference keynote, Paul Saffo spoke of innovation in sensor technology, and predicted a coming era of sensor-equipped "smartifacts." Dr. Saffo sees key inventions as leading to information processing eras characterized by the widespread availability of important artifacts symbolizing the decade:

Era                  Invention        Artifact
processing (1980s)   microprocessor   personal computer
access (1990s)       laser            Web, Internet
interaction (2000s)  sensor           smartifact
Dr. Saffo speaks of information processing eras, but eras may also be characterized by predominant conceptual paradigms. The idea of decentralized, "out of control" systems, inspired by chaos theory, the productivity of capitalistic economies, the growth of the Internet, etc., is characteristic of our time. The invention of the microprocessor led to the processing era, and the invention of the laser to the access era. (Of course there is a significant time lag for engineering and manufacturing improvement between an invention and the start of its era.) He predicts the next era will be one in which information processing devices equipped with low-cost sensors interact with the external world on our behalf. Note that the eras are cumulative -- smartifacts will use networks, which use personal computers. Each new generation increases the value of and demand for the prior products.

A suite of technologies will fuel the rise of sensors. These building blocks include micro-electronic mechanical systems (MEMS), piezo materials, VLSI video and others like micro-impulse radar. These are not new inventions, but their engineering has improved to the point where they are poised for low-cost mass production and ubiquitous application.

In producing MEMS we use semiconductor manufacturing techniques to produce miniature devices capable of measuring acceleration, temperature, pressure, fluid flow, and so forth, and causing some action based on that input. Dr. Saffo used the example of the accelerometers that detect auto collisions and trigger the opening of airbags. Today's airbags are attractive targets for auto thieves because they are costly, but MEMS-equipped airbags will be too cheap to bother stealing. They will also adjust the force of inflation as a function of passenger size and weight. MEMS will be used in an unpredictable variety of applications, from sensing nerve gas on battlefields to building optical switching into communication networks.

Piezo materials give off an electrical charge when flexed and, conversely, flex in the presence of an electrical field. As such they can be used as both sensors and effectors, and will see application in smart surfaces, for example airplane wings.

VLSI video cameras are another important building block, providing the "eyes" for information processing machines. Today the charge-coupled device (CCD) sensor and attendant circuitry in a digital camera cost around $18. This figure will drop precipitously in the next generation as the CCD and circuitry are combined in a single device, which will then ride the curve of Moore's Law to near-zero cost. Video sensors will become throw-away devices. Dr. Saffo compared them to the small processors and batteries used in greeting cards that play a tune when opened: throw-away devices with considerable computing power by the standards of the past. Such cameras will be used in both sophisticated and mundane systems, for example, watching the coffee pot to see when someone refills it, or in unmanned surveillance airplanes the size of a dollar bill.
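
The economics of that prediction are easy to sketch. As a hypothetical illustration (the $18 figure is from the talk; the 18-month halving period is an assumed Moore's-Law-style rule of thumb, not Dr. Saffo's number):

```python
# Hypothetical cost projection: price halves every 18 months, a common
# Moore's-Law-style rule of thumb (the halving period is an assumption).
def projected_cost(start_cost, years, halving_period=1.5):
    """Cost after `years` if it halves every `halving_period` years."""
    return start_cost * 0.5 ** (years / halving_period)

# Starting from today's roughly $18 CCD-plus-circuitry cost:
for years in (0, 3, 6, 9):
    print(f"year {years}: ${projected_cost(18.0, years):5.2f}")
```

Even under this conservative assumption, the sensor is down to pocket change within a decade, which is the point of the greeting-card comparison.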

Dr. Saffo mentioned several other emerging sensor technologies. A complete Global Positioning System on a chip will mean neither you nor your cell phone will ever be lost. Miniature micro-impulse radar devices will be used in auto collision avoidance, finding studs in walls and plastic pipe under your lawn, as a dipstick replacement for checking the oil in your car, etc.

Today we interact with our computers via a CRT or flat-panel interface, and, unless we are typing or using a mouse during a "session," the computer does not know we exist. If a meteor suddenly hit me, the cursor on my computer would go on blinking forever as it waited for me to type the next character. Dr. Saffo predicts that the relative importance of session-based computing will diminish during the era of interaction through sensors. Sensor-equipped computers will watch the world, act on our behalf, and keep us informed as to what is happening.

We can also look forward to the widespread substitution of computation for material. For example, bridges that sense their loads and make real-time adjustments to structural properties, skyscrapers that dynamically adjust to wind and other forces, or skis that sense the terrain and other variables, will require less physical material than their "dumb" predecessors.

Computation will also have to be decentralized, as the communication links between huge numbers of sensors and effectors become unmanageable and run up against power and signaling-speed constraints. This will create a demand for self-organizing systems of autonomous sense-compute-effect devices, in effect a large "out of control" system.

The applications Dr. Saffo discussed involve analog inputs, and analog computing may once again ascend in importance. We might revisit the work and world of analog computing pioneers from Fourier to Vannevar Bush.
We shape our tools, and they shape us. Dr. Saffo noted that children who play with toys like Talking Barney, Tamagotchi, and Furbys are already beyond session-based computing. They will grow up taking it for granted that they will be accompanied through life by intelligent devices which help them interpret and control the world.
For more information, see Dr. Saffo's paper Sensors: The Next Wave of Infotech Innovation. Dr. Saffo recommended the book The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom by David Brin for those interested in the privacy implications of the future he envisions. His keynote was followed by the first of five TechnoVista sessions.

TechnoVista I: The Amazing Future of the Desktop

  • Rob Enderle, Vice President, Giga Information Group

The first TechnoVista session looked at innovation in client technology. The opening keynote was delivered by Giga analyst Rob Enderle who feels there is a disconnect between computer users and vendors, particularly in the home. Early users of automobiles had to be competent mechanics, but that is no longer the case, and computing is undergoing a similar shift. Users want to shift from products to services and from complex computers to appliances. This reflects a maturing of the market which has spread well beyond technicians and computer specialists.

The replacement of a proprietary PBX with Centrex service is an example of the sort of simplification Mr. Enderle envisions coming to our PC desktops. He cited a case in which a manufacturer of PBXs concluded that even for them, the economies of scale, reliability, and ease of use and management of Centrex made it more attractive than a PBX.

In their homes people will want appliances like Web TVs and set-top boxes, which are upgraded over the network. They will subscribe to a service, as we do today with CATV and phones. The same trend will occur on business desktops, which are difficult to manage when their configurations are as heterogeneous as they are today.

Desktop operating system sales trends, source: Giga. Mr. Enderle predicted desktop operating system sales through 2002, and he forecasts a significant share increase for Windows CE. Note that he is using "Windows CE" generically to include literal Windows CE machines and offerings of competitors like Symbian, the new AOL/Netscape alliance, or even a Linux derivative supplied by a company like Transmeta.
Transmeta is a secretive, well-funded VLSI design company which will use IBM as a foundry. Founded by former SPARC architect Dave Ditzel, it has hired Linus Torvalds and other prominent people from MIT, Stanford and Sun. Transmeta recently applied for patents on software for on-the-fly translation of x86 code to native code for its VLIW CPU (U. S. Patent 5,832,205), so it is presumably developing a low-cost x86 replacement.

Windows CE will compete with Windows 9x and 2000 for future desktops; it will not be confined to portable devices. Software upgrades will come automatically from servers on the Internet. For example, users will go to a Compaq site to get operating system upgrades with the proper drivers and tuning for Compaq machines. This will reinforce the Compaq brand, even if the software is from Microsoft. In-house desktops will often be upgraded over a company server, not a server on the Internet. Browsers will also become lighter, and users will add capability as it is needed from the network.

Mr. Enderle cited factors that will hurt Windows 2000 relative to CE. Windows 2000 will debut at a time when people are busy with year 2k problems, so they will be slow to evaluate and adopt it. Relations between Intel and Microsoft also seem to be fraying as evidenced by Intel testimony in the anti-trust trial and the cancellation of joint projects between the companies.

Mr. Enderle expects that prices of desktop machines will continue to decline through 2002. By 2002 he predicts average laptop prices below $1,500, desktops with monitors below $1,000 and CE-class machines just over $500. (Mr. Enderle predicts that flat-panel displays will enter the mainstream in 2002). He forecasts profit squeezes on the traditional desktop and laptop machines, leading to some shakeout in manufacturing and distribution, and opening a window of opportunity for CE machines. He concluded his presentation by showing prototypes of potential portable and desktop CE-class machines, including the iMac.

The analyst keynote was followed by product demonstrations. These began with a "scene setting" keynote by the vendor Giga chose as best among the demonstrating vendors. Sybase was the Scene Setter for this session.

The Sybase Scene Setter was given by David Yach, Vice President and Chief Architect for Mobile and Embedded Computing. There are three trends driving his work.

One is the proliferation of handheld machines. According to Giga, the handheld market is growing at twice the rate of the desktop market and will exceed traditional laptop sales by 2002. This means Sybase must target a small, heterogeneous form factor with limited resources. These machines need access to organizational information.

The second trend is the proliferation of ERP systems. This means mobile workers must now interact with application servers in three-tier systems as well as the databases behind them.

The third trend is the proliferation of devices with embedded processing power. For example, Mr. Yach anticipates smart vending machines interacting with ERP systems and databases.

Sybase is addressing these requirements with Adaptive Server Anywhere UltraLite deployment technology. Developers may take advantage of Sybase's established database replication and synchronization capability in small machines which are not continuously connected. UltraLite generates client modules which contain only the code needed to support the specific application. Today's tools are targeted at the technically proficient developer, but higher level tools will be forthcoming.

Mr. Yach demonstrated an order entry application using both a Palm Pilot and a RIM interactive pager. His demo was followed by four others:

UltraLite, see above, Sybase.

Briefcase, a service in which a mobile user can store personal information, email, etc. in a secure Web-accessible location on the Internet. Visto.

Employee Desktop, presents a common Web-based interface creating a home base for accessing an integrated suite of core business applications such as expense reporting. Concur Technologies.

Zinnote, a 1998 Best of Comdex winner that rapidly generates reports and graphs from disparate ODBC databases, and displays, prints, or saves them as HTML or Word files. Positive Support Review, Inc.

Embassy, a security chip and associated software for clients. A trusted client can be served by an insecure data link, and Embassy has several impressive OEM wins. Wave Systems Corporation.

The demonstrations were followed by a discussion and question and answer session with participation by a second Giga analyst, a panel of user experts, the audience and demonstration presenters. Finally, the audience was polled on three questions:

  • Which of these products was the most innovative? (I)
  • Which has the greatest business application potential? (A)
  • Which had the best presentation? (P)

The following are percents of the vote received by each product:

[Chart: audience voting percentages by product; Employee Desktop led the voting.]

Sapphire/Web benchmark. The system sustained a throughput of 100,000 interactions per minute (blue) with an average response time (green) of 0.93 seconds.

A server farm handling 100,000 interactions per minute.

TechnoVista II: Behind the Scenes at the Web Site

  • Mike Gilpin, Vice President, Giga Information Group

Giga analyst Mike Gilpin discussed the evolution of the Web in his opening overview. During the first years, the Web was used for publishing, with a one-way flow of information between the supplier and consumer. Much of the early content was academic and government material, reflecting the origin of the Internet, and interaction was limited to navigating through static pages. Some questioned the business viability of the Net, but static commercial information came on-line rapidly. Email was also an important Internet application (and continues to be). Email and static Web pages may be handled behind the scenes by a Web server and an email server which might be integrated with a directory server.

Today we are witnessing the emergence of early e-business. We now look for interaction such as database retrieval and update and on-line shopping. On the client side this has led to innovations like Java applets, dynamic HTML, XML, and lightweight and mobile DBMS technology. Behind the scenes at the Web site there is a proliferation of special-purpose servers, for example for streaming audio and video, chat, threaded discussion, telephony and teleconferencing. At the same time a general purpose server has emerged, the Web application server. It hosts the exploding number of intranet and Internet applications, often tying them to databases and legacy applications running elsewhere on the corporate network.

The primary focus to date is on new ways to use the Internet within traditional business and business practices, but that too will change. Mr. Gilpin expects we will see a maturing of e-business in which there are fundamental changes in the way business is conducted. Many enterprises are rapidly moving to global commerce and extending their networks to link customers and suppliers around the world. Thus Internet applications are going beyond the boundaries of the organization and becoming mission-critical, placing increasingly heavy demands on the Web site. This new role will lead to increasingly complex Enterprise Application Servers, which host critical applications, and continuing innovation in client technologies to support a large, mobile workforce.

The job of building and managing mission-critical Web sites has taken a quantum leap in complexity and importance, requiring much better tools in order to improve the following:

  • reliability, availability and scalability
  • client connectivity
  • server connectivity
  • developer productivity
  • manageability

Like any critical information system, Web sites must be reliable and available around the clock (particularly in the case of global organizations). How quickly will a potential customer click to another site if yours is down? The scaling requirement is particularly important at a Web site since peak loads must be handled, and all things relating to the Internet or intranets seem to grow at an accelerating rate. There must be on-demand scalability because there is no way to predict traffic patterns and growth at this time. Even if your site is not down, customers may abandon it if it is slow.

Client connectivity refers to pushing operation deep into the organization and across organization boundaries. Traveling managers, the sales force, branch personnel, customers, suppliers, etc. will all expect to access and update data on Web sites. These users will be increasingly remote from the central organization and many will be infrequently connected and using a variety of client machines (as covered in the session on The Amazing Future of the Desktop).

Server connectivity refers to the requirement that new Web applications be able to share information with applications currently running on mainframes or minicomputers. (Someone quipped that legacy applications are harder to get rid of than cockroaches). Of course, Web applications must also communicate with legacy and new databases.

Web sites must also be manageable. Everyone recalls creating their first Web page by adding a few HTML tags to an ASCII file using a text editor. Within five minutes, they had created and published a document that could be read by people all over the world. That experience was inspiring, pointing immediately to the importance of the Web, but it was deceiving. We now expect professional quality user interfaces for even static Web sites, and developing realistic interactive applications is as difficult on the Web as in-house. We need increasingly powerful development tools and comprehensive general-purpose application servers as applications become complex.

Applications have become more numerous as well as more complex. We have progressed from our first "hello world" test pages to vast Web sites with many applications, documents, developers, versions, locations, languages, etc. Furthermore, the value of the information assets we are managing has increased dramatically and continues to rise. Such an environment can quickly get out of control without tools to organize and manage development.

Giga selected John Capobianco, Senior Vice President of Marketing at Bluestone Software, to deliver the Scene Setter keynote. He stated that an enterprise information system must integrate information processing from all parts of the organization: research and development, marketing, shipping, finance, human resources, sales, distribution, support, manufacturing, purchasing, accounting and inventory control. The information system must also serve customers on the Internet, employees on an intranet and suppliers and other partners on an extranet. Finally, heterogeneous legacy applications must be incorporated in the system.

While the Web opened our awareness of the need to extend our information systems, a Web server cannot by itself meet the demands outlined above. This has led to newer, more general and complex application servers.

Mr. Capobianco cited five market research firms that estimated the application server market at between roughly $750 million and over $2 billion in 2002. (Giga's estimate was at the low end, but it was not clear that each of the research companies defined "application server" in the same way.) He sees the application server market as dividing into three ranges: enterprise class, such as outlined above; mid-range, which focuses on new, database-centered applications; and low-end RAD tools for relatively low volume Web applications. While there are a number of competitors in each group today, he expects the market to shake out over the next several years as it did in operating systems, databases, and ERP systems.

The fundamental unit of work in an enterprise system is the "interaction," which he defined as everything that takes place between a user entering something and information being returned. Thus a single interaction could encompass a good deal of server-side activity. An enterprise application server must provide a controlled, secure environment for high volume interaction processing, development tools, integration with other systems, and system monitoring and management.

Bluestone's Sapphire/Web is such a system. For the demonstration, Bluestone teamed with Hewlett-Packard to set up 100 machines, including clients, IIS and Netscape Web servers, Java application servers, database servers, payment and security certificate servers, and Bluestone's servers. They ran a simulation using a PC Magazine benchmark load with an 18-application mix, and the system was able to sustain 100,000 interactions per minute.

Bluestone does not sell their software; they meter and charge by the interaction. This is an example of the shift toward services predicted by Mr. Enderle. To put this in perspective, Mr. Capobianco estimated that Yahoo handles 100 million interactions daily. This was indeed an impressive demonstration, which took many around-the-clock days to set up and debug.
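
A quick back-of-the-envelope check of that comparison (the 100,000 interactions per minute and 100 million per day figures are from the talk; the arithmetic is ours):

```python
# Comparing the demo's sustained rate with the quoted Yahoo load.
demo_per_minute = 100_000        # Bluestone/HP benchmark result
yahoo_per_day = 100_000_000      # Mr. Capobianco's Yahoo estimate

demo_per_second = demo_per_minute / 60           # about 1,667/s
yahoo_per_second = yahoo_per_day / (24 * 3600)   # about 1,157/s

print(f"demo:  {demo_per_second:,.0f} interactions/s")
print(f"Yahoo: {yahoo_per_second:,.0f} interactions/s")
print(f"ratio: {demo_per_second / yahoo_per_second:.2f}x")
```

In other words, the demo's sustained rate was somewhat higher than Yahoo's average daily load, which makes the comparison apt.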

The following products were selected for demonstration during this session:

Sapphire/Web, see above, Bluestone, Inc.

WebDB, a complete solution for building and deploying a database-centric Website, oriented toward relatively non-technical developers. Oracle.

UltraLite, a demonstration showing the Palm Pilot and RIM pager used with Sybase's Jaguar enterprise application server to integrate the Web and Quicken in making and recording a stock trade. Sybase.

Oracle8i Lite, enables laptop users to disconnect from the network and continue using browser-based Java database applications, with automatic replication when they reconnect. Oracle.

TeamSite, a development system geared for large Web sites with many contributors and multiple languages and versions. Interwoven Software.

The audience votes (percents) for most innovative, greatest business application potential and best demo were:

[Chart: audience voting percentages by product; WebDB and Oracle8i Lite led the voting.]

TechnoVista III: Serving Up Servers

  • Richard Fichera, Vice President, Giga Information Group

Mr. Fichera presented platform trends and issues. He reminded the audience that platform decisions must be based on business considerations, not flashy technology. We want "sales-force management," not "HTTP." Factors like internal staff skills, legacy system requirements, minimizing risk, and flexibility outweigh technological concerns.

That being said, there are core technologies: operating systems, servers (hardware and software), storage and database management systems. These constitute the bulk of capital outlay outside of the network itself, and once chosen, one is locked in to an extent, and the limits of performance, reliability and scalability are set. Mr. Fichera pointed out that these core components are increasingly being decoupled as high-speed networks allow them to run on separate machines, often in different locations. Networks are beginning to perform more like backplanes than I/O devices.

Rich Fichera shows server performance gains. Server performance is rapidly improving, with TPC-C benchmark rates (tpmC) rising and cost per transaction falling exponentially. Since around 1994, performance has improved at roughly 160% per year (as opposed to 60% per year for microprocessor performance). On the other hand, we are constantly doing more complex things, and there is increasing overhead in generalized middleware, so these performance increases are being absorbed. There is also a trend toward convergence of cluster and large single-system performance.
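
Those two rates compound into a dramatic gap. A small sketch (the 160% and 60% figures are Mr. Fichera's; the four-year window is our assumption):

```python
# Compounding the quoted annual improvement rates.
def cumulative_gain(rate, years):
    """Total multiplier after `years` of improvement at `rate` per year
    (rate=1.6 means +160% per year)."""
    return (1 + rate) ** years

years = 4  # roughly 1994 through 1998
server = cumulative_gain(1.6, years)  # servers: +160%/year
micro = cumulative_gain(0.6, years)   # microprocessors: +60%/year
print(f"servers: {server:.0f}x, microprocessors: {micro:.1f}x over {years} years")
```

Over four years, server performance gains outpace microprocessor gains by roughly a factor of seven, which is why system-level engineering (I/O, clustering, memory) matters as much as CPU speed.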

There are five alternatives for server configuration: uniprocessor, symmetric multiprocessor, massively parallel, clustered, and fault tolerant computers. Mr. Fichera feels that for IT applications massively parallel processors have been the "technology of the future" for ten years, and will remain so. Fault-tolerant computers will remain a niche market where sub-second interruption is intolerable. Moving from single processors to multiprocessors to clusters increases scalable performance and availability. Increased availability is a major attraction of clusters. The line between multiprocessor systems and clusters is beginning to blur as multiple CPUs share switched backplanes and multiple operating system partitions run independently.

Reflecting its LAN origins, Windows NT is most appropriate for low volume applications served from one location and Unix is suitable for dispersed, high volume applications. MVS is also suitable for highly complex, centralized applications.

While NT will begin to challenge Unix at the low end, Unix (and other high-performance, reliable legacy systems) will continue to be the solution of choice at the high end, and high-end applications are growing rapidly, with compound annual growth rates of 50% for large DBMS, 85% for large Web sites, and 35% for transaction-processing applications. Giga has observed roughly a 3:1 performance gap between single-system Unix and NT servers, and they expect this relationship to persist in the near future. In comparing high-end production data warehousing applications, they see solid order-of-magnitude performance differences. They expect Unix to increase already substantial advantages over NT in terms of the numbers of CPUs, memory size and aggregate I/O rates in multiprocessor configurations. They also predict that the number of nodes per Unix cluster will remain much higher than for NT. This translates into a clear advantage for Unix and legacy systems over NT for high-end applications.

NT also lags in reliability. The following table shows Giga's estimates of annual downtime and percent availability:

Mr. Fichera stated that a Unix system would have to be heroically managed to achieve 99.99% availability.


Systems                    Annual downtime   % Avail.
NT                         3.5 days          99%
Unix, VMS                  8.5 hours         99.9%
MVS, Unix, VMS, AS/400     1 hour            99.99%
MVS, VMS, fault tolerant   <5 min.           99.999%
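
Annual downtime converts to percent availability with simple arithmetic on the table's downtime figures:

```python
# Converting annual downtime to percent availability.
HOURS_PER_YEAR = 365 * 24  # 8,760

def availability(downtime_hours):
    """Percent of the year a system is up, given annual downtime in hours."""
    return 100 * (1 - downtime_hours / HOURS_PER_YEAR)

for label, hours in [("NT, 3.5 days", 3.5 * 24),
                     ("Unix/VMS, 8.5 hours", 8.5),
                     ("MVS/Unix/VMS/AS400, 1 hour", 1.0),
                     ("fault tolerant, 5 min.", 5 / 60)]:
    print(f"{label}: {availability(hours):.3f}% available")
```

Each additional "nine" of availability cuts allowed downtime by a factor of ten, which is why the jump from 8.5 hours to under 5 minutes is so demanding.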


Of course the results are not in for Windows 2000, and Microsoft President Steve Ballmer has promised vastly improved reliability; however, Mr. Fichera remains skeptical that this complex operating system, with an estimated 30 million lines of code, will be reliable. He did point out, however, that a number of major vendors will be delivering products targeted at NT clustering enhancements, storage management, system management, and enhancements to Microsoft BackOffice.

The Scene Setter keynote for this session was delivered by Rich Marcello, Vice President, OpenVMS, Compaq Computer Corporation. He began with an overview of the VMS market. They target high-end, non-stop systems, have a growing installed base of over 450,000 systems, and a dominant position in several vertical markets. His presentation focused on partitioning technologies:

  • Application partitioning: separate workspaces within a single OS instance.
  • Hardware partitioning: hardware-enforced access barriers among multiple CPUs and OS instances and applications in the same computer.
  • Adaptive Partitioned Multiprocessing (APMP): resources (CPUs, memory, I/O) are dynamically partitioned by software with multiple instances of the OS executing in a single computer.
An adaptively partitioned multiprocessing system can dynamically allocate CPUs, memory, and I/O resources to operating system instances. Source: Compaq. APMP systems are able to dynamically allocate resources as demand varies. For example, a system may be running three applications on three instances of the OS, each with several CPUs and some private memory and I/O resources assigned to it. These resources can be reallocated under program control as demand varies. Performance, cost, scalability, and availability are all improved relative to both single systems and clusters.
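
The reallocation idea can be caricatured in a few lines. In this toy model, the instance names, load numbers, and one-CPU-at-a-time policy are all invented for illustration; Compaq did not describe Galaxy's actual policies at this level:

```python
# Toy model of adaptive partitioning: move one CPU from the least-loaded
# OS instance to the most-loaded one. Everything here is invented for
# illustration; it is not Compaq's algorithm.
def rebalance(partitions):
    """partitions: dict of instance name -> {'cpus': int, 'load': float}."""
    busiest = max(partitions, key=lambda name: partitions[name]['load'])
    idlest = min(partitions, key=lambda name: partitions[name]['load'])
    if busiest != idlest and partitions[idlest]['cpus'] > 1:
        partitions[idlest]['cpus'] -= 1
        partitions[busiest]['cpus'] += 1
    return partitions

# Three applications on three OS instances in one box:
system = {'OLTP':  {'cpus': 4, 'load': 0.95},
          'batch': {'cpus': 4, 'load': 0.30},
          'web':   {'cpus': 2, 'load': 0.60}}
rebalance(system)
print({name: p['cpus'] for name, p in system.items()})
```

Run repeatedly as loads shift, a policy like this keeps CPUs where the demand is, which is the essence of the APMP pitch.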

Compaq is offering APMP in their Galaxy software. Mr. Marcello described several early adopters, and cited an example in which a 39% cost saving was achieved and performance improved when switching from three boxes to a single, 10-CPU machine. He outlined the features of Galaxy Phase II (late 1999), which will support up to 32 processors, hot CPU and memory swapping, up to 8 instances of VMS, a very fast "LAN" over shared memory, and RAM disk in shared memory. He concluded with a statement of Compaq's commitment to continued support for VMS and Alpha and indicated that Galaxy would eventually be available for NT and Unix as well as VMS.

Compaq not only demonstrated Galaxy, they provided entertainment. They not only set up and dynamically reconfigured a 10-CPU Galaxy system, they had a dance troupe entertain us.

The demonstrations in this session were:

Galaxy, see above, Compaq.

Times Ten, a high performance (10X improvement is their name and goal), memory-resident database management system. TimesTen Performance Software.

WebQOS, a program which allows one to assign priority levels to different applications (based on port number, client IP address, etc.) so that important transactions can be handled immediately and less important transactions queued. Hewlett-Packard.

Think Different, a fast, low-cost, plug-and-play Web server for the department or small business. Apple.

DB Guard, continuously tracks and learns database patterns and raises an alert when deviations from normal behavior exceed user-defined thresholds. DB Guard Systems.

The audience votes (percent) for most innovative, greatest business application potential and best demo were:

[Chart: audience voting percentages by product; Times Ten, Think Different, and DB Guard led the voting.]

TechnoVista IV: The Network of the Well-Connected Enterprise

  • Dan Merriman, Vice President, Giga Information Group

Would you want to risk your career on the Internet with best-effort IP routing and questionable security? Dan Merriman feels you may have no choice as real-time on-line connectivity, collaboration and commerce are becoming fundamental business requirements, making the network a strategic asset. Internal communication must be effective, and key business processes like customer service, inventory, order entry, marketing and engineering must be extended to customers, partners, and suppliers.

The emerging dominance of IP in enterprise networks enables powerful connectivity, but it introduces challenges and choices at three levels:

  • Infrastructure level: choosing and combining link alternatives.
  • Application level: support for and integration of a variety of services.
  • Connectivity level: secure, reliable connection to external sites, remote workers, customers, suppliers and others.

The goal is to leverage the power of IP networks while avoiding landmines. Different enterprise needs typically require several infrastructure level alternatives, for example, frame relay to some locations and private lines to others. Latency, setting appropriate application priority levels, scalability, and reliability must be considered at the application level. Speedy service and security -- authentication, encryption, authorization, non-repudiation -- must be provided at the connectivity level.
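The application-level concern of assigning priority levels can be illustrated with a toy traffic scheduler, in the spirit of products like WebQOS that classify requests by port number, client IP address, and so on. The rules and addresses below are hypothetical examples, not any vendor's actual configuration:

```python
import heapq

# Hypothetical classification rules: lower number = higher priority.
# Each rule is a (predicate, priority) pair; the first match wins.
RULES = [
    (lambda pkt: pkt["port"] == 443, 0),              # secure transactions first
    (lambda pkt: pkt["src"].startswith("10.1."), 1),  # an assumed partner subnet
    (lambda pkt: pkt["port"] == 80, 2),               # general web traffic
]

def priority(pkt, default=3):
    """Return the priority of a packet per the first matching rule."""
    for match, prio in RULES:
        if match(pkt):
            return prio
    return default

def schedule(packets):
    """Drain packets in priority order (FIFO within a priority level)."""
    heap = [(priority(p), i, p) for i, p in enumerate(packets)]
    heapq.heapify(heap)
    while heap:
        _, _, pkt = heapq.heappop(heap)
        yield pkt

traffic = [
    {"src": "192.0.2.7", "port": 80},
    {"src": "10.1.4.2", "port": 8080},
    {"src": "192.0.2.9", "port": 443},
]
ordered = list(schedule(traffic))
# The HTTPS transaction is served first, then the partner-subnet request,
# then ordinary web traffic.
```

A real implementation would of course queue continuously rather than sort a batch, but the classify-then-prioritize structure is the same.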

The importance of network management is implicit in Mr. Merriman’s speculation that the 80-20 rule of thumb no longer holds and perhaps half of the data communication load is now in the WAN, not the LAN. Network managers balance bandwidth and intelligence. Terminals in a branch office require little bandwidth and a simple connection. A workgroup LAN or an e-commerce Web site requires a lot of bandwidth, but relatively simple connections. A connection to a remote branch office may not need a lot of bandwidth, but security requirements may add complexity. A campus or corporate backbone requires both high bandwidth and complexity for security and resource allocation. Traffic patterns are very difficult to predict, but the trend is for rapid increases, and the network must scale end-to-end, avoiding weak links.

Extranets and the Internet may mean planning for hundreds of thousands of users, so security is critical. There are critical choices in the relationship between the company firewall and VPN. The security architecture must also be comprehensive. A single sign-on point with an enterprise-wide directory and public-key security are preferred to security "islands" for given applications or classes of user.

The security and performance requirements of such networks call for explicit, business-driven policies on security and resource allocation. The enterprise directory is central to scaling a large network. Today directory information is scattered in email systems, databases, the phone book, human resources files, etc. Directory standards and metadirectories -- which do not consolidate directories but know what each contains -- will help make networks manageable.
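The metadirectory idea -- a directory that knows where each attribute lives rather than copying the data -- can be shown with a minimal sketch. All directory names and attributes here are invented for illustration:

```python
# Back-end directories that remain the systems of record (illustrative data).
EMAIL_DIR = {"jdoe": {"email": "jdoe@example.com"}}
HR_DIR = {"jdoe": {"department": "Engineering", "manager": "asmith"}}
PHONE_DIR = {"jdoe": {"phone": "x4321"}}

class MetaDirectory:
    """Routes attribute queries; it never copies entries from the sources."""

    def __init__(self):
        self.sources = {}  # attribute name -> source directory

    def register(self, source, attributes):
        """Record which attributes a source directory can answer for."""
        for attr in attributes:
            self.sources[attr] = source

    def lookup(self, user, attr):
        """Route the query to the source that owns the attribute."""
        source = self.sources.get(attr)
        if source is None:
            raise KeyError(f"no directory holds {attr!r}")
        return source[user][attr]

meta = MetaDirectory()
meta.register(EMAIL_DIR, ["email"])
meta.register(HR_DIR, ["department", "manager"])
meta.register(PHONE_DIR, ["phone"])

# One query point, several back-end directories:
dept = meta.lookup("jdoe", "department")   # "Engineering", from the HR files
phone = meta.lookup("jdoe", "phone")       # "x4321", from the phone book
```

The point of the sketch is the registration table: the metadirectory stays manageable because each source keeps its own data and only advertises what it contains.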

Today we have many disparate components -- WANs, firewalls, telephony equipment and services, servers, etc. from many vendors. Mr. Merriman expects that we are now in transition to integrated, coherent networks. During this transition period he suggests that we focus on "pain points" like prioritizing applications when assigning network resources and integrating and managing the corporate security perimeter.

There were two Scene Setter keynotes in this session, from Aventail and from Cabletron.

Evan Kaplan, President and CEO of Aventail, talked about extranets. He sees a fundamental shift from internal business processes toward inter-business processes. He makes four basic assumptions with respect to the extranet:

  • Increased revenue and market advantage, not cost savings, drive the extranet.
  • Network security is an oxymoron, and who is inside and who is outside the organization is confusing.
  • HTTP is a stepping stone to richer resource sharing using TCP/IP applications.
  • Enterprises seek to manage partnership, not merely secure traffic.

He presented the example of ClearTrade, a financial organization with 5,000 brokers. The agents and brokers sent faxes, phone calls, letters, etc. which were handled by people at corporate headquarters. Those intermediaries used 20 discrete applications built using various technologies. There were political and practical obstacles, but all this has been moved to a successful extranet incorporating Aventail's ExtraNet Central software.

ExtraNet Central involves a transparent client and a server. The server provides an array of extranet functions, including logging and reporting, encryption, application and protocol filtering, authentication, access control, and policy management. A year after ClearTrade implemented their extranet, they had increased revenues, more customers, faster deployment of services, improved service, reduced time to clearance of trades, and improved partner retention.
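The access-control and logging functions such a server provides can be sketched as a first-match policy table. The rule format and names below are hypothetical, not Aventail's actual API:

```python
# Illustrative policy rules: (role, application) -> verdict. First match wins.
POLICY = [
    {"role": "broker",  "app": "order-entry", "allow": True},
    {"role": "broker",  "app": "settlement",  "allow": False},
    {"role": "partner", "app": "order-entry", "allow": False},
]

AUDIT_LOG = []  # the logging-and-reporting side of the server

def check_access(role, app, default=False):
    """Apply the first matching rule; deny by default and log every decision."""
    verdict = default
    for rule in POLICY:
        if rule["role"] == role and rule["app"] == app:
            verdict = rule["allow"]
            break
    AUDIT_LOG.append((role, app, "ALLOW" if verdict else "DENY"))
    return verdict

allowed = check_access("broker", "order-entry")   # permitted by rule
denied = check_access("broker", "settlement")     # explicitly denied
unknown = check_access("guest", "order-entry")    # no rule -> default deny
```

Deny-by-default plus a complete audit trail is the conservative design for an extranet, where "who is inside and who is outside" is, as Mr. Kaplan noted, confusing.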

Trent Waterhouse, Senior Architect at Cabletron also gave a Scene Setter keynote. He pointed out that changing business processes and application mix drive infrastructure change, which in turn feeds back by making new processes and applications feasible. The resultant rapid change and increasing reliance on communication make network management critical. His goal is networks which are like utilities: simple, reliable, scalable, meterable, mobile and manageable. This requires comprehensive network-management software.

He described and demonstrated Cabletron's Spectrum network-management software, which is bundled with training, engineering service and support. He stressed that virtually all networks are heterogeneous today, and Spectrum is standards-based and works with all companies' equipment and all communication technologies. (However, in Cabletron routers, ASICs may implement some functions which are implemented in software in others.) Spectrum supports management of access control, bandwidth, scheduling, quality of service, IP addresses, etc.

The products demonstrated in this session were:

ExtraNet Central, see above, Aventail.

Spectrum, see above, Cabletron.

Entrust PKI, public-key infrastructure technology combining certification authority, encryption and digital signature capabilities with fully automated key management, designed for systems with at least 5,000 users. Entrust Technologies.

What did employees downloading the Monica Lewinsky testimony cost your organization? Net Access Monitor, software to monitor, manage, and predict traffic and utilization, answering questions such as who is using the network, when they use it, and which protocols they use. Sequel Technology.

GTE Internetworking, a full-service VPN offering providing global reach, single-contact management, security, service-level guarantees, transparent Internet gateways, etc. GTE.
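The digital-signature capability at the heart of a PKI like Entrust's can be illustrated with textbook RSA. The tiny classroom key below is wildly insecure and omits padding -- it only shows the sign/verify cycle that a PKI automates, with key management, at enterprise scale:

```python
import hashlib

# Classic classroom RSA parameters (insecure, illustration only).
p, q = 61, 53
n = p * q                            # 3233, the public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (2753)

def digest(message):
    """Reduce a SHA-1 digest modulo n so it fits the toy key size."""
    return int.from_bytes(hashlib.sha1(message).digest(), "big") % n

def sign(message):
    # The signer uses the private key; a PKI keeps this key managed and safe.
    return pow(digest(message), d, n)

def verify(message, signature):
    # Anyone holding the public key (certified by the CA) can check.
    return pow(signature, e, n) == digest(message)

sig = sign(b"trade order #1024")
ok = verify(b"trade order #1024", sig)        # True
bad = verify(b"trade order #1025", sig)       # False (with high probability)
```

What products like Entrust PKI add on top of this arithmetic is the hard part: certification authorities to bind keys to identities, and automated key generation, rollover, and revocation for thousands of users.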

The audience votes (percent) for most innovative, greatest business application potential and best demo were:

[Vote chart: ExtraNet Central, Entrust PKI, Net Access Monitor, GTE VPN Services]

Rob Enderle's vision of tomorrow's palmtop+video phone. Source: Giga.

For an early discussion of ubiquitous wireless computing, see Weiser, Mark, The Computer for the 21st Century, Scientific American, September, 1991, pp. 94-104.

TechnoVista V: The Boomer's Kids -- What's Coming Next?

  • Rob Enderle, Vice President, Giga Information Group

The format of this session differed from the others. Since the products being demonstrated did not fit into a category, there was no Scene Setter keynote. Instead there were five product demonstrations, preceded by a look into the future by Rob Enderle.

Yesterday's amazing concepts become today's products. Ten years ago few people would have foreseen that grandmothers would be exchanging Christmas cards and doing Christmas shopping on the Web today. The next generation, the "boomer's kids," are being born and growing up with communication and information processing technology. They take it for granted and are not afraid of it. They will also challenge authority and the status quo -- intimidation or fear of trying something new will not keep them as customers. This is not the "you can't be fired for buying IBM" generation.

Mr. Enderle took the audience through a rapid tour of the future. He began with the very-near future, AutoPC, which will ship in January. AutoPC combines a Windows CE computer, radio, CD audio/ROM, GPS receiver and more in the form factor of an in-dash auto radio. It is voice controlled, allowing hands-off operation. This product anticipates a future of ubiquitous portable devices which are networked wirelessly and do not look much like today's desktop or laptop PCs. The remainder of Mr. Enderle's slide show presented conceptual descriptions of hypothetical machines we will use in our homes, offices, and vacations.

Mr. Enderle's presentation was followed by demonstrations of five products.

Streaming-Media Things, streaming-video objects for a Microsoft media player that is modified to show a client's logo rather than Microsoft's. The objects respond to mouse-over, click-on and click-through events. The player hides the URL, and the Thing is encrypted to protect intellectual property. Thing Technology markets this as a service; for example, Major League Soccer might pay for the right to post a particular Thing for two weeks before their all-star game. Thing Technology.

Visible Decisions, a data reduction, analysis and visualization tool allowing users to drill down from summary to detail information, do what-if analysis, and spot trends. Visible Decisions.

Coronado, a tool for animated, 3-D visualization of data on Microsoft BackOffice servers. Reports can be saved as PowerPoint, Word, HTML files or emailed. Portola Dimensional Systems.

eTEAM, an easy-to-use collaboration tool which allows one to record comments while using virtual "markers" to point to and draw on the screen. These recorded sessions, in which the voice recording and activity on the screen are synchronized, can be emailed to collaborators. They may also be saved on disk for downloading in a training session. InfoCast.

VRI 3D Projection System, a prototype three-dimensional display in which figures appear to "float" in air. It is initially designed for applications like kiosks and point-of-sale displays, but we may one day see 3-D images dancing on our business desktops and home entertainment centers. Optical Products Development Corporation.

The audience votes (percent) for most innovative, greatest business application potential and best demo were:

[Vote chart: Streaming-Media Things, Visible Decisions, VRI Display]

At the end of TechnoVista V, an award ceremony was held to recognize the program the audience selected as best overall in each session. The winners were … drum-roll …
  • Sybase: Adaptive Server Anywhere, UltraLite
  • Interwoven Software: TeamSite
  • Performance Software: TimesTen
  • Aventail: ExtraNet Central
  • Thing Technology: Streaming-Media Things

While these programs were the winners, the others were surely not "losers," since they had all been selected from among many candidates by Giga analysts to qualify for presentation at the conference.

These are not futuristic innovations, but engineering advances. TimesTen re-architects the database to reflect the fact that memory prices are low and falling rapidly, Aventail packages extranet-support functions in a manageable box, Sybase has an efficient port of their database technology to a low-resource, disconnected client, Interwoven brings management and order to distributed, multi-version Web development and asset management, and Thing Technology modifies existing technology to support a new business model. This is in no way meant to be pejorative. As they say, the "devil is in the details," and these products reflect the maturation of network-based IT. It is easy to make the conceptual leap from a "hello world" home page to pervasive intranets, e-commerce, and extranets; it is quite something else to deliver an efficient, manageable implementation, which is what these programs do.

Executive Exchanges, Investor Insight Sessions, and Informal Interaction

The Executive Exchanges were held in conference rooms in parallel with the theater-style TechnoVista sessions. These were small groups of around 20, led by Giga analysts. There were four Executive Exchanges corresponding to the first four TechnoVista topics.

The sessions began with the attendees introducing themselves and stating their motivation for attending the session. This was followed by a free-flowing exchange. An attendee might raise a question of concern to his or her organization, which the analyst would answer by presenting a concept, the results of some research, or the experience of a client. The other attendees would also share their experience. The sessions were more like group consulting (therapy?) than lectures with questions and answers.

There was some overlap between the content of the Executive Exchanges and the corresponding TechnoVista sessions, but the emphasis and style were different. For example, the Executive Exchange on servers, led by Rich Fichera, spent a lot of time on server consolidation, because that was a topic facing many of the attendees. Experiences were shared. People reported difficulty combining SQL Server and Exchange on one machine, said as many as five or six NetWare servers could be replaced by one high-end Xeon system, found that static Web pages consolidate well, and noted that different release levels of the same program do not work well together. Mr. Fichera said he was cautious about putting multiple applications on one server, and likes running multiple instances of the operating system on one system. (Perhaps he was thinking of Compaq's Galaxy.)

The attendees were also concerned about balancing the need for control over IT and the payoffs operating units could often reap by rolling their own systems. They all had problems supporting and integrating isolated systems which had popped up within their organizations. Many felt this was a management problem, and the costs of support, backup, maintenance, integration, etc. had to be made explicit and recognized. The consensus seemed to be that these costs were usually absorbed without complaint because the payoff from new systems is high.

These sessions were held at the same time as the TechnoVistas, so attending them meant missing something else, but they fruitfully combined characteristics of one-on-one consulting and informal discussion with peers. Attendees were also able to schedule 30-minute one-on-one meetings with a Giga analyst.

Mr. Gilpin's overview of this market. The y-axis runs from low-level communication protocols to abstract data description to generalized application-aware servers; the x-axis shows the type of abstraction, moving from data to process-oriented abstraction to object-based abstraction.

There were also Investor Insight sessions corresponding to the first four TechnoVistas. These were small-group sessions with the analyst who followed the relevant market. For example, Mike Gilpin presented an overview of the enterprise Web software market with an emphasis on middleware.

Mr. Gilpin noted that the lower-level programs are now commodities, so the more abstract and general tools are where the opportunities are for investors. He presented estimates of the sizes of the market segments, and answered questions about many of the companies competing in this market. He also predicted the growth of utilization of Web/distributed platforms. He estimates that by 2003, 90% of in-house developers and ISPs will be running such applications, and that they will represent roughly 35% of the total application portfolio. He predicted that consolidation of this market would be about twice as fast as it had been in the client-server market, and that at the end there would be three markets (Web application servers, department application servers, and the enterprise market) with perhaps two or three suppliers each. He sees this market growth as inevitable, being driven by direct customer access, supply-chain integration, globalization, mergers and acquisitions, and other factors. He even found time to talk about middleware architecture, but stopped short of giving stock tips.

Informal interaction during a break.

Lunches were served outside.

Attendees could follow up on demonstrations in the exhibit area.

Informal exchange is a big part of conferences like this. The attendees were mostly IT managers with common problems and experiences. The coffee breaks and lunches afforded time to exchange war stories, and the setting was very nice. Breaks, lunches and evenings also provided ample time to visit the product exhibits. Thirty vendors had small booths in the exhibit hall. Attendees could follow up on products that caught their eye during the demonstrations, and see others that had not been selected for the program.

Giga took informal exchange a step further by conducting polls of attendee practice at the end of the TechnoVista sessions. There were 330 attendees, including press, exhibitors, etc., so roughly 100 people representing user organizations responded to each question. There were thirty-one questions altogether, and the poll results were very interesting.


This conference was of interest to those with large IT shops who feel an imperative to move to Web-enabled systems to integrate enterprise information processing and communicate with customers, suppliers and partners. IT executives heard informed talks on industry trends and practice from analysts and vendors. There was also ample opportunity to interact in small groups during the Executive Exchanges and Investor Insight sessions, and each attendee could schedule a half-hour one-on-one session with a Giga analyst. Interaction with peers and the electronic attendee-practice polls were also valuable. The conference could be seen as a discount consulting session.

The presentations and audience polls made it clear that, in addition to the Internet and intranets, we will see IP extranets connecting companies. It was also clear that Unix and legacy systems on mainframes and AS/400s will remain important on the server side. While Microsoft is stronger on the client side, they are also somewhat vulnerable there. Windows CE and similar systems will be increasingly important as thin desktop clients and in portable and embedded applications. The high-end tools market is still wide open, and there is the attendant risk of committing to a vendor that subsequently fails.

This audience was considering high-volume, mission-critical applications, so Unix and legacy systems took center stage. There was relatively little mention of Microsoft on the server side, but that is a bit like ignoring the proverbial elephant in the living room. Microsoft is strong in the departmental and low-end Web server market, and surely intends to compete for higher-end applications. For a more Microsoft-centric view of IP-based applications, see our review of the 1998 Microsoft Business Applications Conference.

The emphasis at this conference was on innovation, but the innovations presented were not flashy breakthroughs and concept demonstrations. They were more mundane, dealing with application, scaling, development, management, security, etc. In other words, they were tools and techniques to tame and practically implement systems based on earlier conceptual breakthroughs. That being said, one is struck by the rapidity with which IP-based applications are growing in complexity and ambition. I have the feeling there will be plenty of pioneers with arrows in their backs.

Disclaimer: The views and opinions expressed on unofficial pages of California State University, Dominguez Hills faculty, staff or students are strictly those of the page authors. The content of these pages has not been reviewed or approved by California State University, Dominguez Hills.