Special Report: Siggraph 2000
SIGGRAPH (hereafter Siggraph; http://www.siggraph.org) is the Special Interest Group for Graphics of the venerable professional organization, the Association for Computing Machinery. It's also the name of the group's annual trade show for computer-graphics (CG) fans of all persuasions, from grizzled veteran hackers who cut their eyeteeth on computer punch cards to starry-eyed students. But Siggraph isn't just about CG: It also concerns itself extensively with computer-human interaction. This point was resoundingly brought home by Raymond Kurzweil, the keynote speaker at this year's event, held last week in New Orleans.
Kurzweil, an inventor, artificial-intelligence expert, and all-around visionary, titled his talk "The Human-Machine Merger: Why We Will Spend Most of Our Time in Virtual Reality in the 21st Century." He predicted a day, not extremely far in the future, when computers and humans will merge seamlessly. You might think this sounds rather far-fetched, but Kurzweil made a convincing case by presenting statistics that seemed to indicate that the pace of technical change has been accelerating, doubling the rate of progress every decade, and will continue to do so.
The ongoing trend toward miniaturization will inevitably lead to nanotechnology, he says, and we'll probably have self-replicating nanotechnologies in 25 years, not 100, as some predict. But before that, possibly less than 10 years from now, we'll have tactile-based virtual reality. In discussing VR, he made the point that, despite those who claim that VR participants act irresponsibly, the technology is analogous to the telephone: people who make verbal commitments by phone generally meet them.
According to Kurzweil, the next major advance in digital technology will be three-dimensional computing, which works like the brain; he claims to have seen prototypes of such machines with 300 layers of circuitry. He predicted that a one-inch nanocube, potentially feasible by the year 2030, would be 1,000 times more powerful than the human brain. He then went on to talk about brain scanning via nanotech, which will ultimately allow us to reverse-engineer the brain, creating and re-creating the types of processes that actually occur in our most important organ. Thus, machine intelligence won't be artificial, but derived from human intelligence, and will have a natural, human feel. (Are you there, Hal?)
From these advances we'll have access to virtual reality that connects directly to our brains, courtesy of nanobot implants. This will encompass all of our senses, and even extend to emotions. Of course, the highly optimistic talk didn't touch on how a futuristic Hitler could capitalize on such access to the brains of millions or billions, but no doubt security will be foolproof by then (ahem).
The talk, based (I think) on Kurzweil's recent book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, was well received by the 25,000+ show attendees, especially when he concluded by making a firm connection between his vision and his audience. His final predictions were that VR would be a graphics challenge, and that graphics would be 50 percent of computing.
The topic of virtual reality was also covered on a more mundane level at Siggraph in the form of various panels and presentations. A panel titled "The Actual Reality of Virtual Reality Software" was moderated by the irrepressible Linda Jacobson of SGI, who likened VR software to Rodney Dangerfield: both have a cult following, neither became stars until middle age, and neither gets any respect. Panelist Ken Pimentel of Engineering Animation, Inc. bemoaned VR's lack of user-interface standards, concluding that only application experts can derive the value of the visualization experience, and that these experts can't reuse their knowledge on the next immersive application. Kent Watsen of the Naval Postgraduate School talked about general-purpose toolkits for creating virtual environments (VE), saying that "Because these toolkits have different architectural implementations, they have fragmented the VE community." On a more optimistic note, he predicted that "component-oriented programming, a recent trend in software engineering focused on establishing the 'standards for interoperability,' may be the proverbial 'silver bullet' (Brooks, 1987) the VE community so desperately needs."
Of course, once we've built our virtual realities, there remains the problem of what to do with them. Many believe that the ability to participate in interactive storytelling holds the answer. A special session titled "Fiction 2001," somewhat overpopulated with nine panelists, held the sizable audience's attention with imaginings of the future of fiction in the emerging realms of networked computers. Novelist/actor/director (and Microsoft researcher) Andrew Glassner supposed that interactive fiction is a problem because, while most stories are about conflict, most people are conflict averse. What's more, he said, improvisation is very difficult. He offered the theater piece "Tony and Tina's Wedding" as an example of the ultimate form of interactive storytelling, because audience members can say anything they want to the actors, and real, trained people respond.
Also in Fiction 2001, author Espen Aarseth made the case for games as story. Jesse Schell of Walt Disney Imagineering reinforced this notion with his emphatic belief that "The videogame is the most exciting medium of our time." AI researcher Phoebe Sengers predicted a battle of ideas between corporate retellers of "tired twentieth-century narratives" and insurgent writers and artists who will create "atomized, decentralized, emergent post-narratives, elegant in construction, intellectually dense, extremely hip, but not exactly a pleasure to read." Her hopes are for a third alternative, "technology that supports our need and desire to tell [meaningful stories]." The final word on the topic came from Jay David Bolter, who stated, "We cannot know the future of fiction, but we can talk about its present condition. What strikes me is the number and variety of narrative media forms today: from traditional novels and films to websites and interactive games. There is no one form around which our culture's creative energies are converging." You can find more from this and several other Siggraph panels at http://www.mrl.nyu.edu/noah/s2000/. Incidentally, this site uses a Java applet for a clever zooming effect that, alas, doesn't seem to work with Netscape.
Siggraph attendees had a chance to participate in interactive storytelling via a fascinating experiment called Terminal Time, created by three researchers whose names I unfortunately neglected to record. In this interactive cinema piece, the computer poses a series of psychographic/demographic multiple-choice questions to the audience, whose members applaud for the chosen answer. The choices that receive the loudest response "win," and thus drive the succeeding cinematic segment. The resultant short movie is a "world" history of sorts, selected from over four hours of digital video, accompanied by a synthetic-voiced narrative. Both are pieced together by an artificial intelligence program based on the audience responses to questions like "Things always work out well in the end (agree, disagree?)."
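The applause-driven selection step can be sketched in a few lines. This is only an illustration of the voting mechanism as described; the names and the segment table are hypothetical, and Terminal Time's actual system assembled its narrative with a far more sophisticated AI program than a simple lookup.

```python
# Sketch of an applause-driven voting step like Terminal Time's, assuming
# applause is sampled as one loudness reading per answer choice. All names
# here are hypothetical; the real piece used an AI planner, not a lookup table.

def tally_applause(choices, loudness):
    """Return the choice whose applause measured loudest."""
    if len(choices) != len(loudness):
        raise ValueError("need one loudness reading per choice")
    winner_index = max(range(len(choices)), key=lambda i: loudness[i])
    return choices[winner_index]

def next_segment(question, choices, loudness, segment_table):
    """Pick the next cinematic segment based on the winning answer."""
    winner = tally_applause(choices, loudness)
    return segment_table[(question, winner)]

# Example: "Things always work out well in the end (agree, disagree?)"
segments = {
    ("optimism", "agree"): "triumphal_history.mov",
    ("optimism", "disagree"): "critical_history.mov",
}
clip = next_segment("optimism", ["agree", "disagree"], [0.62, 0.81], segments)
# "disagree" drew the louder applause, so the critical retelling is chosen.
```

The interesting design consequence, visible in the show sessions, is that whoever claps loudest, not most numerously, steers the story.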
The creators ran the audience through the exercise twice so we could choose different answers each time, and see how the program changed. Indeed, the results differed vastly between the two sessions. Unfortunately, the synthetic-voice narration, although it was the best the creators could find, was often difficult to understand, forcing the audience to work hard to hear it. Still, it was fairly obvious in many cases how the software slanted the events and sequences of history based on the audience responses, driving home the oft-observed thought that history is never objective. An unexpected side effect was that noting your fellow audience members' choices was often more interesting than the video. Also, it was easy to skew results by clapping loudly, with the result that sometimes, less-popular choices "won" the vote.
It came a day before the actual end of the show, but the session "Phil Tippett's History of Animation" closed Siggraph with a bang for many attendees. Tippett, a veteran creator of cinematic special effects who's worked on epics from Star Wars to Starship Troopers (and beyond), is now the proprietor of Tippett Studio, a special-effects production house in Berkeley, and dreams of someday producing full-length "fake … er, digital" movies. In a mobbed hall, he presented a sequence of film clips designed to give perspective to animators who may have little idea of what's come before. As Tippett showed the clips, he often paused to offer ideas or reminisce about his experiences in the biz. He's a refreshingly candid speaker, and addressed the audience as if he were chatting with an old friend across the dinner table.
Tippett started out with the generally acknowledged father of film animation, Georges Méliès, and then proceeded to show clips from such varied artists as Winsor McCay, Ladislas Starevich of Russia, Willis O'Brien (best known for King Kong), the Fleischer brothers, George Pal, and, of course, Ray Harryhausen. A special treat for the audience was Tippett's own tribute to Harryhausen, made for the latter's 80th birthday party, which showed the skeleton warriors from Jason and the Argonauts singing "Happy Birthday." From his own oeuvre, he showed excerpts from Star Wars, Robocop, Dragonheart, and others. At the request of an audience member during the Q&A, Tippett promised to post a list of his animation history recommendations on his Website (http://www.tippett.com/), but it wasn't there yet at the time of this writing.
The high point of any Siggraph is the Electronic Theater, and this year's was no exception. The films, showing a wide range of CG applications, from entertainment to science and engineering, are generally available for viewing later on in a roadshow, so I won't spoil any of the surprises for you. But be sure not to miss "Lucie," with the most realistic character modeling to date, and the hilarious "For the Birds," from the CG humormeisters at Pixar.
Siggraph is as much about the future as it is about the present and past, and nowhere is that more evident than in the Emerging Technologies (ET) exhibit. It provides an ideal opportunity for clever inventors and companies from around the world to show what they've been working on.
Not exactly a trend, but visible at no fewer than two ET booths, were "mixed-reality" demos. Mixed reality, aka "augmented reality," is similar to immersive VR in its use of head-mounted displays and motion tracking, but instead of seeing a completely synthetic environment, you see synthesized content overlaid on a real-world background. Japan's Mixed Reality Systems Laboratory Inc. (http://www.mr-system.co.jp/index_e.shtml) mounted an example of its version of MR with a multi-player game called Mission at the RV-Border, in which three participants at a time stood around a platform containing a number of large egg-shaped objects. Upon starting the simulation, the eggs took on moving plasma-type patterns and hatched various animated 3D monsters that hovered momentarily and then charged the players. Each player was equipped with a gun that, depending on how it was held, also served as a sword (for more effective close-up combat) or a shield visible to other players. It was a most effective demo, showing how the monsters could hide behind real-world objects, and also cast shadows on them.
If you'd like to participate or find out more, the second annual International Symposium on Mixed Reality will take place March 14-15, 2001 in Japan at Pacifico Yokohama. Held jointly with IEEE Virtual Reality 2001, its purpose is to review the progress of current research and define new research goals. ISMR organizers are currently soliciting papers covering augmented reality/virtuality, image-based rendering, geometrical/photometrical registration, 3D/haptic display systems, and related topics. For more info, go to http://www.mr-system.co.jp/ismr/2001/.
And while we're on the topic (more or less), IEEE-VR (http://www.vr2001.org) will take place at the same location March 13-17, 2001. The deadline for papers and participation in exhibits, panels, video, workshops and tutorials is September 1.
Sort of in the mixed reality vein was Magic Book, from Susan Campbell of the University of Washington's Human Interface Technology Lab. It looks like a physical storybook, but once you put on the special glasses, you see a room rise from the pages, which you can go down into and explore. Campbell's goal with the project is to explore "transitions between physical reality, augmented reality, and immersive reality in a collaborative setting."
It wasn't exactly mixed or virtual reality, but Tom Malzbender's Microtelepresence, a research project of Hewlett-Packard Labs, showed a fascinating application for the tracking head-mounted display. The HMD was driving a stereoscopic video microscope coupled to a robotically controlled motion platform, which looked down into a flat, transparent container full of exotic-looking insects fresh from the Louisiana Bayou. Since you were looking down from above, you didn't feel like you were actually in the box with the bugs, but it certainly was a new way of seeing our multi-legged friends.
Arrayed around the ET floor were digital full-color, full-parallax holographic stereograms that required no goggles to view. Emilio Camahort's HoloSpace pieces were as large as 3 x 1.2 m, and as deep as 1.2 m. Some were actually movies that you could view by moving from side to side.
Plasm: In the Breeze had a whole room to itself, with two tires swinging over a pleasant synthetic brook. Patterns in the brook were supposed to respond to the swinging motions, but it wasn't working when I visited.
Richard Marks (Richard_marks@playstation.sony.com), a researcher at Sony Computer Entertainment, showed Medieval Chamber running on a PS2. Users could pop virtual bubbles with a real plastic mace, or light a virtual candle with a real plastic torch (not burning, though). According to Marks, the software actually responded to the colors of the "input" devices, rather than their shapes.
Jakub Segen showed VaRionettes, a vision-based interface that used actions and gestures of the user's hands to control avatar movements. For instance, if you point forward with your index finger, the avatar walks forward. While pointing, turning your hand makes the avatar turn, and making a fist stops it. If you make a throwing motion, the avatar tosses a ball. It worked quite well.
The DIVERSE (device-independent virtual environments - reconfigurable, scalable, extensible) team at Virginia Tech announced the first beta release of its open-source (GNU LGPL) software API. It provides a common UI to interactive graphics and/or VE programs, a common API to VE-oriented hardware such as trackers and HMDs, and a "remote shared memory" facility that lets data from hardware or software be asynchronously shared between local and remote processes. Download it free from http://www.diverse.vt.edu.
Still futuristic, but a bit closer to practical reality than the far-out exhibits at the Emerging Technology area, is Siggraph's Startup Park. This is a special area of the exhibition floor where young companies with fresh ideas (they hope) get to strut their stuff.
Perhaps the most impressive exhibit here was a preview of the next version of Darktree Textures (I reviewed the current version a while back in 3D magazine). This ultimate toy for algorithmic texture freaks lets you string together texture components, tweaking and animating as you go, for results you simply can't get elsewhere. Version 2 is set to ship in December with a number of new features.
I spent a bit of time playing with a prototype of Global Haptics' geOrb, a 3D computer control device. The handheld sphere reminds one of a geodesic dome, with pushbuttons in each panel and several rocker switches placed about the surface. In the admittedly limited demo, you could push in or pull out areas on an onscreen mesh sphere with the buttons, reversing the effect with a rocker switch. Based on the simple observation that a convex surface covered with tactile sensors affords an intuitive mapping to 3D space, the geOrb offers 3D control without the need for moving parts. It's designed to capture the natural coordination between hand, eye, and brain to move and manipulate 3D objects.
Initial versions will go for $400, but the company says it will ultimately cost about the same as a computer keyboard. Global Haptics is expected to introduce its first commercial geOrb within the next six months. For more information, visit www.globalhaptics.com.
The Grain Surgery plug-in, shipping in Q4 for Photoshop ($200) and After Effects ($600), uses a new algorithm for removing unwanted image noise while retaining image sharpness and detail. It also includes a grain-creation tool that matches different film stocks.
UK-based CreaToon (http://www.creatoon.com) bills its eponymous software, now in version 1.2, as _the_ software for cut-out (2D) animation. Features include real-time editing, automatic tween generation, unlimited number of layers, spline view for fine-tuning animation, 40-step undo/redo, still/animated textures, and more.
Tactex Controls (http://www.tactex.com) showed its "multi-touch" controller, MTC Express, which uses a "smart fabric" technology. It's basically a touch pad with 256 levels of pressure resolution, but the special feature is that it can recognize several touches simultaneously. It was being demonstrated not only as a graphics input device, but also built into an electric guitar as an auxiliary MIDI input device (e.g., for percussion).
Animator Joe Alter (http://www.joealter.com/), creator of the amusing short Jersey shown at the Electronic Theater, shipped version 1.3 of his Shave and a Haircut hair/dynamics plug-in for LightWave 3D. New features include combing, surface lock, and transplant.
Currently offered in pre-launch beta from face2face (www.f2f-inc.com), a Lucent Technologies venture, is alterEGO facial-analysis software. The software is designed for automated lip synchronization and facial animation of characters in television and film production, electronic games, and advanced Internet animation applications.
Generating some of the biggest buzz at the show was Sony Computer Entertainment's new GScube, which the company was billing as "16 PlayStation 2s in a box." More precisely, it contains 16 128-bit "Emotion Engine" CPUs, each running at 295 MHz, and each accompanied by 128MB RAM for a total of 2GB, plus 16 I-32 graphics synthesizers. The outputs are combined via "pixel merger."
Sony intends GScube as a graphics "visualizer" for broadband network applications including game development and motion picture production. It needs to be run with a server to feed it data, plus a computer to provide housekeeping. A number of demos at the show had the unit driving 1920 x 1080 displays at 60 frames per second, in some cases with huge databases.
For instance, in one demo I saw, middleware developer Criterion had obtained scene assets from the PDI movie Antz. They were showing a scene animated in real time, with 140 characters at 7,000 polygons each, which works out to nearly a million polygons per frame. Criterion said it had also achieved, in lab tests, real-time animation with 300 million polys in a scene. Criterion expects to provide its 3D middleware and tools, including RenderWare for 3D programmers and RenderVision for 3D artists, to GScube application and content developers in the near future.
Game developer Square USA showed an amazing Siggraph demo with a high-poly character and environment, complete with realistic hair animation, all in real time. Sony said the product will ship this winter, but has yet to announce pricing. Knowing their history, it'll probably be pretty reasonable. At any rate, GScube has the potential to become one of the most important tools in the digital content creator's armamentarium.
In a related announcement, SGI (http://www.sgi.com) debuted its Origin 3000 series server, being deployed as a broadband server for GScube. Just as well, since the latter effectively takes SGI out of the visualization biz. The servers individually support up to 512 MIPS processors, and can cluster to tens of thousands of processors.
At Siggraph, NewTek, Inc. announced a new video and audio conversion and encoding tool called Vidget, scheduled to ship in Q3 2000 at $99. It's designed for applications such as Internet streaming, DVD or CD-ROM creation, high-definition encoding, Web content creation, and multimedia production. Vidget supports major video and audio formats including MPEG-1, MPEG-2, QuickTime, and AVI. It also supports NTSC, PAL, 16:9 wide-screen, and high-definition MPEG-2 with either fielded or progressive encoding, and with constant or variable bit rate. AVI support in Vidget complies with Video for Windows, and the QuickTime translations are 3.0+ enabled. It integrates with Windows Explorer so that right-clicking an image, video, or audio file in Explorer yields a Convert To... menu, allowing conversion between formats.
Also scheduled to ship from NewTek in (late) Q3 is LightWave 6.1, with a number of new features.
Lastly, the company announced that it will create versions of LightWave and its Video Toaster products for Intel's Itanium (formerly Merced) 64-bit processor.
Cebas, Digimation Show Max Plug-ins
Digimation was showing off its new Turbo Squid service, a Website (http://www.turbosquid.com) where artists can offer digital assets such as 3D models and texture maps for download by users at relatively low prices. Digimation splits the proceeds 50-50 with its vendors.
Also new at the booth was Light Galleries, a clever plug-in that can help you solve your scene lighting problems. Just add some lights anywhere, say a spotlight and an omni light, tell it how many combinations you want, and it proceeds to render a series of thumbnails with random light placements. You can then choose the ones you like and use a special editor to combine them in any ratio, with optional colorizing. Once you've made your decision, Light Galleries populates your scene with the light sources that worked, and you're off and running.
Max doesn't have built-in cloth, so you can use Digimation's new Stitch plug-in to clothe your characters. Its GarmentMaker feature lets you design and create digital fashions by creating traditional flat patterns and panels from splines. Once clothing is positioned on your character, Stitch can automatically drape and gather the material for a realistic fit. Other features include the ability to "sew" cloth objects atop each other for pockets, etc., plus the ability to constrain cloth vertices to other objects in the scene.
The company is offering special Siggraph pricing on these and other plug-ins until August 31 at http://www.digimation.com/.
German plug-in producer Cebas demonstrated its new BunchOfVolumes plug-in ($195), which offers advanced lighting capabilities. These include colored shadow maps, colored volume shadows, real-time volumetric effects, area lights and fast area shadows, area shadow maps, and caustic lighting effects. Cebas also announced PyroCluster as a plug-in for Cinema4D, a distinctively European 3D app. PyroCluster is a 3D "volume tracer" used for creating vaporous effects such as smoke and clouds. It's scheduled to ship in Q4 '00.
Meta Motion, Famous Show Full-Body Motion Capture
Meta Motion, along with Famous Technologies, demonstrated a full-body motion capture solution using the former's newly introduced Gypsy 3 product. Meta Motion showed skeletal and character motion capture with Gypsy 3 in conjunction with Kaydara's FiLMBOX and DreamTeam's Typhoon. This was combined with facial mocap using the Animazoo Face Tracker and Famous Technologies' FAMOUSfaces facial motion capture and animation software. 5DT completed the loop using its DataGlove to capture hand movements.
Gypsy 3 pricing starts at $25,000, and the system offers a capture rate of up to 120 fps.
Also, Famous Technologies introduced F-A-S-T (FAmous STreaming), its technology to stream 3D facial animation over the Internet, in real time, within Web pages. Viewable with standard browsers via standard 28K modem connections, F-A-S-T is said to be optimized for encoding, compressing, and transmitting voice, facial motions, and models. F-A-S-T will be part of FAMOUSfaces2, priced at $4,990 and expected to ship Q3 2000.
Softimage showed the upcoming XSI 1.5, which will include polygonal modeling, a dope sheet, Booleans, and subdivision surfaces. The company also demonstrated an integration of the Motion Factory technology (an interactive character walking around under user control, with path planning and real-time IK blending). The message was: "Behavioral animation will make linear animators more productive by enabling the interactive 'sketching' of animation, and will give interactive animators sophisticated real-time characters for gaming." The XSI real-time 3D viewer is initially being positioned as a data and artwork validation environment providing a workable solution to the real-world problems of game developers on the PS2 and other platforms. Soft also announced exporter support for Macromedia and Metastream for 3D Web platforms.
Alias|Wavefront Previews Maya Fusion 3, Will Port Maya to Linux
Alias|Wavefront previewed Maya Fusion 3 ($5,000) at Siggraph 2000; the new version is scheduled to ship this fall for Windows NT/2000.
Also, Alias|Wavefront plans to port the entire Maya 3D software product line to the Linux platform, and says it will deliver Maya on Red Hat Linux in early 2001. The company attributes its ability to do so to recent advancements in the graphics libraries available for Linux.
project:messiah Launches New Identity, Software
Character animators might like to know that the ex-LightWavers at newly named company pmG (project:messiah Group) demonstrated the new UI for messiah:animate 3.0 ($895), expected to ship later this year as a stand-alone version of the LightWave plug-in. The new software will sport a faster, customizable interface, an SDK to let developers create their own effects and applications, a non-linear animation editor, support for RenderMan RIB files, and real-time particles that react to forces, gravity, and wind effects. The company also showed examples of messiah:render ($995), a joint development between pmG and "international render guru" Marcos Fajardo.
Reflex Shows Drama 3D Character System
I was given a brief preview of Reflex Systems' Drama system for human animation, but the software was in alpha status and not much was working. Based on "Reflex-DNA," the software is designed to let the user build models based on descriptions of traits such as bones, muscle, fat, skin, hair, and clothing, instead of dealing with polygon mesh data.
The main thing I could see was that it's really designed for non-techies, so that when you change an aspect of a model--for instance, using a different-shaped head from the built-in library--the rest instantly adjusts to accommodate the modification. Company founder/chief developer Jean Nicholson Prudent also mentioned that a number of game companies are interested in the product.
It looks to have a lot of promise, but it's way too early to be able to evaluate its actual value in real-world production. Also, I was curious about how characters could be used with other 3D apps such as Maya, because the Reflex tech is so unlike anything else out there. Prudent promised that data could be brought in from other programs, but it remains to be seen how effective that can be.
Curious Labs Announces Poser Pro Pack
Coming this fall from Curious Labs, Inc. is the Poser Pro Pack ($149), a set of new plug-ins and application enhancements that expand Poser 4's feature set.
Using new plug-ins included in the Poser Pro Pack, Poser 4 will be able to share character animation, figure bending, scene and key frame data for integration within LightWave and 3D Studio Max R3.
Web developers will be able to export Metastream 3 files. For Web-deployable 2D content, the Poser Pro Pack will export vector converted Flash animations from any Poser scene.
Discreet Shows New Products/Technologies
Autodesk division Discreet introduced the latest versions of several of its software applications, and announced a new platform for the game community.
The company presented a technology preview of the next major release of 3d studio max (note lower-cased name, now consistent with existing Discreet products), including character animation advances such as a new IK architecture with IK solver plug-ins, weighted constraints, and custom manipulators. Visual effects-production capabilities include fast interactive shading, multi-layered rendering in single passes, and integration throughout Discreet's product line. Game development advances include DirectX 8 abilities, advanced modeling and texturing across subdivision surfaces, patches, and polygons, and custom attributes for bringing data, interface and output together for seamless coordination.
Also previewed at Discreet's booth was an initiative code-named 3d studio gMAX, based on the upcoming 3d studio max platform that will deliver 3D tools to game players everywhere. The new technology is designed to give consumers tools to create new levels and individualized characters for popular game titles.
Discreet also launched character studio 3, the latest version of the company's character-animation plug-in. Among the new features are crowd control and behavioral-based animation, fast Physique skinning and enhancements to the inverse kinematics tools.
Advances to visual effects systems inferno, flame, flint and effect let 3d studio max models and associated 3D data migrate into the advanced Discreet systems for integration of 3D into live action sequences.
combustion, Discreet's software for the Macintosh and Windows NT platforms, and 3d studio max combine on the desktop or across a network to share 3D data through Discreet's "Rich Pixel Format" (RPF). The combustion workspace is accessible from within 3d studio max, providing the animator with vector paint tools and a compositor for creating or fine-tuning any map within the scene, with synchronized timelines.
Lastly, Discreet announced various Web 3D-related initiatives in conjunction with other companies.
3Dlabs, Mitsubishi to Integrate Voxel, Polygonal Rendering
3Dlabs and Mitsubishi Electric division Real Time Visualization (RTViz) will collaborate to produce a new class of advanced visualization applications that will blend polygonal graphics and voxel-based volumetric rendering. The first results of this collaboration will be optimized drivers from 3Dlabs for its Oxygen family of graphics accelerators that enable the real-time integrated display of OpenGL graphics and volumes rendered on RTViz VolumePro accelerators. These drivers are expected to ship from 3Dlabs in the third quarter of 2000.
Voxels (volume elements) represent the internal structure of objects acquired from real world sample data such as CT, MRI, Ultrasound, sonic, radar and other discrete acquisition methods. Polygons define surface characteristics and are represented as geometrical mesh triangles. Some users would like to be able to render and visualize both primitives together in real time. For example, doctors need to visualize the scalpel (polygons) inside the brain (voxels) and geologists need to visualize well-head data (polygons) inside the earth's fault subsurface strata (voxels) as they search for oil.
MetaCreations to Acquire Viewpoint Digital
In a surprise announcement at the show, MetaCreations Corporation said it would acquire Viewpoint Digital, currently owned by Computer Associates International. Financial terms were not disclosed. MetaCreations said the acquisition will increase its sales force and interactive content creation capacity, and give Metastream access to clients such as Neiman Marcus, Volvo, Boeing, Hasbro, NBC, CNN, Ford and General Motors.
CA will continue to work with MetaCreations and will retain its 20 percent ownership stake in Metastream to provide visualization solutions to enterprise clients. CA also will retain a license to use the Viewpoint technology.
Systems in Motion Releases Coin v1.0 Beta
Oslo-based Systems in Motion released Coin v1.0 Beta, an open-source implementation of the Open Inventor v2.1 API, a high-level development tool for 3D graphics applications. Basic file import, rendering, and interaction with a 3D object can reportedly be implemented in a few lines of code with Coin's scene-graph-oriented class library. Coin offers a C++ application programming interface and uses OpenGL or Mesa for accelerated rendering. Coin is currently supported on MS Windows 95/98/NT/2000 as well as Linux and a variety of other Unixes.
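To show what "scene-graph oriented" means in practice, here is a toy sketch of the pattern: a tree of nodes that is rendered by depth-first traversal. These classes are invented for this sketch and are not Coin's actual API, which follows Open Inventor's much richer node design (SoNode, SoSeparator, and so on).

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical node types for illustration only.
struct Node {
    virtual ~Node() = default;
    virtual void render(std::string& log) const = 0;
};

// A leaf node: some drawable shape.
struct Shape : Node {
    std::string name;
    explicit Shape(std::string n) : name(std::move(n)) {}
    void render(std::string& log) const override { log += name + ";"; }
};

// A group node: renders its children in order, depth first.
struct Group : Node {
    std::vector<std::shared_ptr<Node>> children;
    void add(std::shared_ptr<Node> c) { children.push_back(std::move(c)); }
    void render(std::string& log) const override {
        for (const auto& c : children) c->render(log);
    }
};
```

In a library like Coin, loading a file produces such a tree, and rendering, picking, and interaction are all implemented as traversals of it, which is why an application needs so little code of its own.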
Systems in Motion also announced its Coin Free Software Programming Competition, which is open until November 30. First prize is US$5,000, with second and third prizes of US$1,000 each.
Cycore Acquires PuppetTime
Cycore, a developer of interactive 3D technology for e-business, will acquire PuppetTime, a San Francisco-based developer of 3D storytelling technologies. Specific financial terms were not disclosed.
PuppetTime Producer is a 3D character-animation program that lets Web developers create 3D movies using an underlying "cast" of digital actors. The user can direct movies by selecting a digital actor, typing dialog, recording sounds and voice-overs, and choosing actions and emotional states. The puppets and their characteristics are pre-defined.
REM Infografica Reborn as Reyes Infografica
It was nice to see the Spanish plug-in wizards of REM Infografica, which went out of business last year, back at the show as Reyes Infografica (http://www.reyes-infografica.com) with their full line of character-animation and cloth software. They also announced ReyesWorks.com, their middleware visual platform, which runs on Windows and Linux and has been optimized for Intel's 64-bit Itanium processor. As the basis for a collaborative development project, the company will release as open source 400,000 lines of code covering cloth simulation, software agents, NURBS, low-bandwidth organic modeling, voice recognition-driven lip-sync, character animation, and more.
Web3D Consortium Announces New Board, X3D Advances, SDK
The Web3D Consortium announced the results of the annual election of Directors. The newly elected corporate board members are Walter Schwarz (blaxxun interactive), Sandy Ressler (National Institute of Standards and Technology), Don Brutzman (Naval Postgraduate School), Rick Rafey (Sony Corporation), Martin Reddy (SRI International), Rob Glidden (Sun Microsystems) and Neil Trevett (3Dlabs). The newly elected Professional board members are Matthew Beitler, Nicholas Polys, Michael Wagner, and Joe Williams. Trevett was re-elected President of the Consortium.
Brutzman reported on the progress of the Consortium’s X3D Working Group. “Three dozen people from 25 companies and universities attended the Sunday meeting of X3D contributors in New Orleans. They expressed strong interest in rapid completion of the baseline X3D implementations and API. Successful delivery of the open-source example implementation is now planned for this fall. Opportunities for volunteer partners and programmers to participate are still available.”
The Consortium also announced its Summer 2000 Software Development Kit (SDK), containing developer builds of open and community sources maintained by the Web3D Consortium, X3D implementations and tools, an X3D conformance suite, as well as general Web3D media tools and content developed by Consortium members.
Flatland Announces Open Source Release of 3DML
In an attempt to promote its Web 3D format, Flatland Online has released the source code of its 3DML (Three Dimensional Markup Language) Web publishing format.
The release of the 3DML source code follows this year's earlier release of Flatland's Blockset format, which defines the format's building blocks. Builders can use it to create 3DML spots based on blocksets of their own design.
Hypercosm Debuts Player Upgrade
Hypercosm, Inc., a provider of Internet 3D simulation technology, announced the release of its free Hypercosm Player v2.0, which adds a number of new capabilities.
Mendel3D Introduces Internet Platform
Coming this fall from French company Mendel3D is a suite of brand-new Web 3D tools, including MendelBox Publisher, MendelBox Avatar, MendelBox Factory, and Mendel Storyboarder. The company promises an easy-to-use platform for natural, dynamic animation through lightweight files suited to low-bandwidth connections and existing PC configurations.
Inspired by the genetic theories of Gregor Mendel, Mendel3D says it has defined new rules that use genetic laws to animate images the way nature does. As a result, the animation engine and the objects it drives are conceptually defined together, at the same time.
When a figure needs to move, most 3D applications create a static figure and then add commands to make it move. With Mendel3D, all the physics and math that are needed to make a figure move are incorporated in the core definition.
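Mendel3D has published no technical details, so purely as an illustration of that design idea, here is a hypothetical figure whose motion rule lives inside its own definition rather than in external animation commands layered onto a static model:

```cpp
// Hypothetical sketch only; not Mendel3D's engine. The figure's state and its
// update rule (simple gravity with a damped bounce) are part of the object.
struct Figure {
    double y = 10.0;   // height above the ground
    double vy = 0.0;   // vertical velocity, part of the figure's definition
    void step(double dt) {
        vy -= 9.8 * dt;                        // built-in dynamics
        y  += vy * dt;
        if (y < 0.0) { y = 0.0; vy = -vy * 0.5; }  // bounce, losing energy
    }
};
```

An animator using such an object would set goals and let it move itself, instead of authoring every keyframe, which is presumably what makes the resulting files light enough for low-bandwidth delivery.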
ParallelGraphics Launches 3D Software Suite
Also trying to break into the Web3D market is ParallelGraphics, which showed Internet Model Optimiser (IMO), a first-generation CAD optimizer, and Cortona Jet, a cross-platform Java applet for viewing VRML scenes. These new tools complement ParallelGraphics' existing range of software products.
Based on complex mathematical algorithms, IMO reportedly optimizes objects on a shape-by-shape basis while maintaining their visual appearance.
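ParallelGraphics has not disclosed which algorithms IMO uses. As one classic example of the kind of geometry reduction involved, the sketch below uses vertex clustering: snap vertices to a coarse grid and merge those that land in the same cell, trading detail for a much smaller model.

```cpp
#include <array>
#include <cmath>
#include <map>
#include <vector>

using Vec3 = std::array<float, 3>;

// One common simplification approach (vertex clustering), shown purely as an
// illustration; IMO's actual method is not published. A coarser cell size
// merges more vertices and yields a smaller, rougher model.
std::vector<Vec3> clusterVertices(const std::vector<Vec3>& verts, float cell) {
    std::map<std::array<long, 3>, Vec3> cells;
    for (const Vec3& v : verts) {
        std::array<long, 3> key = {std::lround(v[0] / cell),
                                   std::lround(v[1] / cell),
                                   std::lround(v[2] / cell)};
        cells.emplace(key, v);  // keep the first vertex seen in each cell
    }
    std::vector<Vec3> out;
    for (const auto& kv : cells) out.push_back(kv.second);
    return out;
}
```

CAD models are typically far denser than Web delivery can tolerate, so some such reduction step, whatever the specific algorithm, is what makes "CAD to Web" tools like IMO practical.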
Cortona Jet is a small, cross-platform Java applet that enables any standard Web browser with a Java virtual machine to display 3D scenes without the need for special plug-ins.
Caligari Announces trueSpace5
Caligari Corporation announced trueSpace5, a new version of its 3D modeling and animation program coming this fall, with a number of new features.
Spectrum is an independent news service published every Monday for the interactive media professional community by Motion Blur Media. Spectrum covers the tools and technologies used to create interactive multimedia applications and infrastructure for business, education, and entertainment; and the interactive media industry scene. We love to receive interactive media and online development tools and CD-ROMs for review.
Send your interactive multimedia business, product, people, event, or technology news to: firstname.lastname@example.org. We prefer to receive news by email but if you must, telephone breaking news to 510-549-2894. Send review product and press kits by mail to David Duberman, 2233 Jefferson Ave., Berkeley, CA 94703.
If you contact companies or organizations mentioned here, please tell them you saw the news in Spectrum. Thanks.
Please send address changes (with old and new addresses), subscribe and unsubscribe requests etc. to the above address. If you use the Reply function, please do _not_ echo an entire issue of Spectrum with your message.
Publisher's note: We are now accepting limited advertising. If you'd like to offer your company's products or services to Spectrum's elite audience of Internet and multimedia professionals, send an email query to mailto:email@example.com, or telephone 510-549-2894 during West Coast business hours.
- David Duberman
©Copyright 2000 Motion Blur Media. All rights reserved. No reproduction in any for-profit or revenue-generating venue in any form without written permission from the publisher.