
Cycorp: The Cost of Common Sense

How do you survive when it could take decades to build your product?
March 1, 2005

Ask most companies how they bring value to the market and they’ll point to their products. Cycorp is a bit different. The 10-year-old company cares about the services it sells – but mainly because they bankroll its true quest: creating a “knowledge base” called Cyc that can endow computers with something approaching common sense. This quest has been so time-consuming that most venture capitalists would long ago have written off their investments – or demanded the CEO’s head on a platter. That Doug Lenat and his 54 employees have avoided this fate is a lesson in managing long-term, visionary R&D projects.

Two decades ago, Lenat was a computer science professor at Stanford University with a dream of building a computer smart enough to know, for example, that people are smaller than houses and live in them. But he feared that, with just himself and a half-dozen grad students, programming such a computer would take more than a lifetime. Meanwhile, American high-tech leaders were worrying that the Japanese, with their so-called “fifth generation” artificial intelligence development project, would do the same thing to the American computer industry that they had done to the automotive and consumer-electronics industries. So they set up a research consortium called the Microelectronics and Computer Technology Corporation (MCC). Lenat snatched at the backing MCC offered and went to work there in 1984.

During its first decade, Cyc project managers merely had to report what they were doing to MCC once or twice yearly, recalls Lenat. That long-term backing was important because Cyc’s creation involved inputting and organizing the millions of facts that, while seemingly obvious to humans, must be explicitly taught to computers in the logic they can understand. After reaching a certain level of sophistication, Cyc began to help direct its own education by asking questions based on what it already knew. (Lenat hopes that Cyc will eventually be able to read unassisted.) The result: a computer that doesn’t have to be told that parents are older than their children and that people stop subscribing to magazines after they die.
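The idea of explicitly asserted facts plus rules that draw common-sense conclusions can be sketched in a few lines. This is an illustrative toy only: Cyc's actual knowledge is written in its own logic language, CycL, and the predicate and entity names below are invented for this example.

```python
# Toy sketch of "facts + rules" common-sense reasoning.
# NOT Cyc's real representation (that is CycL); names are invented.

facts = {
    ("parent_of", "Alice", "Bob"),   # assertion: Alice is Bob's parent
    ("age", "Alice", 52),
    ("age", "Bob", 24),
}

def age_of(facts, person):
    """Look up a person's asserted age, or None if unknown."""
    for fact in facts:
        if fact[0] == "age" and fact[1] == person:
            return fact[2]
    return None

def parent_older_violations(facts):
    """Rule: every asserted parent must be older than their child.
    Returns the (parent, child) pairs that break the rule."""
    violations = []
    for fact in facts:
        if fact[0] == "parent_of":
            _, parent, child = fact
            pa, ca = age_of(facts, parent), age_of(facts, child)
            if pa is not None and ca is not None and pa <= ca:
                violations.append((parent, child))
    return violations

print(parent_older_violations(facts))  # [] — Alice (52) is older than Bob (24)
```

The point of the sketch is the division of labor the article describes: humans (or, later, Cyc itself) supply the facts, while general rules let the system reject or infer things no one stated explicitly.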

Cyc’s first big step into the real world came in 1994, when the project was spun off from MCC as an independent company, Cycorp. The challenge: how to keep funding a project that was still years, if not decades, away from commercialization. “In 1996, we got our first substantial government contract,” Lenat recalls. Since then, Cycorp has collected about half of its revenue from U.S. government agencies and the rest from companies, mostly for building “semantic maps” that help users pull information from various databases with a single query. By taking on paying projects, Cycorp has been able to stay profitable and debt-free. All of the firm’s stock is owned by its employees, making Cycorp answerable only to Cycorp. “But,” Lenat admits, “we have had to tack with the funding winds. Maybe 50 percent of the funding we get pushes us forward in the direction that we need to go.”
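A "semantic map" of the kind described above can be thought of as a shared concept vocabulary bound to the table and column names each database actually uses, so one abstract query can be translated for every source. The sketch below assumes nothing about Cycorp's real product; the database and column names are hypothetical.

```python
# Hypothetical sketch of a semantic map: one shared concept
# ("customer_name") is bound to the schema each database uses.
# All names here are invented for illustration.

semantic_map = {
    "crm_db":   {"customer_name": ("clients", "full_name")},
    "sales_db": {"customer_name": ("buyers", "cust_nm")},
}

def translate(concept, db):
    """Rewrite an abstract concept into SQL for one specific database."""
    table, column = semantic_map[db][concept]
    return f"SELECT {column} FROM {table}"

# A single abstract query fans out to every mapped source:
queries = {db: translate("customer_name", db) for db in semantic_map}
# crm_db   → "SELECT full_name FROM clients"
# sales_db → "SELECT cust_nm FROM buyers"
```

The user asks one question in the shared vocabulary; the map does the per-database rewriting, which is what lets a single query pull from several schemas at once.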

Cycorp doesn’t even want to be distracted by the rigors of the retail software business; instead, it licenses Cyc for use in third-party software packages. A slimmed-down Cyc is available free to research organizations, and OpenCyc, an even smaller version suitable for desktop computers, is available as a free download. Lenat hopes that hobbyists will start adding terms, some of which would eventually be culled into the Cyc knowledge base, giving it grassroots input – and also establishing Cyc as the de facto artificial-intelligence knowledge base.

“The knowledge in Cyc has gotten quite good,” says Ken Forbus, a professor of computer science at Northwestern University and a current user of ResearchCyc. “Is it perfect? No. Is it comprehensive? No. Is it broader than anything else out there? Yes.”

The time may come, Lenat says, when a greatly expanded Cyc will underlie countless software applications. But reaching that goal could easily take another two decades.
