From the Lab: Information Technology
Digital lighting added after filming, a smarter Web-search tool, and audio ‘thumbnails’ that help find online music faster
Graphics technique allows movie-scene lighting after filming
Results: Researchers led by Paul Debevec at the University of Southern California’s Institute for Creative Technologies have developed computer graphics tools that let filmmakers simulate the live-action lighting conditions of settings that their actors were never in, or add new lighting effects to film they’ve already shot. The researchers previously showed that they could change lighting effects in still images.
Why it Matters: Movie directors use computers to adjust and create visual effects, but for the most part, they can’t tinker with lighting. That means they have to get the lighting just right during filming – a time-consuming and expensive process. The ability to change or re-create lighting after a performance can give filmmakers more flexibility in making the movies they want, while potentially saving time and money on the set.
Methods: The researchers placed an actor inside a spherical structure two meters in diameter that was lined with 156 bright LED light sources. As the actor performed, different lights flashed on and off thousands of times per second, either singly or in groups. A camera filmed the actor at a frame rate equal to the rate at which the lighting changed, so that each frame was lit in a different way, for a maximum of 180 different illumination conditions. The researchers filmed the actor’s head and shoulders, recording up to eight seconds of action; downloaded the information to computers; and used algorithms to select and superimpose different frames to create desired illumination effects.
But there was a problem. Although the actor was filmed at a high frame rate, and the lights flashed just as quickly, the actor still moved appreciably while each of the 180 lighting conditions was being captured. This meant that the position of the actor differed slightly in each frame, so superimposing the frames resulted in smeared images. To solve this problem, the researchers used computer vision algorithms to track and analyze the actor’s facial movements. Based on estimates of how the actor was moving in a given set of frames, they digitally warped the image data to make it look as if each of the 180 frames was taken at the same instant. They repeated this process to produce a set of frames showing the 180 individual lighting conditions for each 24th of a second of the actor’s performance, which they then assembled to produce the final film clip with the computer-generated lighting.
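The superposition step relies on the linearity of light transport: once the basis frames for a time slice are motion-aligned, any new lighting environment is simply a weighted sum of the single-light images. The following toy sketch illustrates that idea only; it is not the researchers' code, and the array sizes, the `relight` helper, and the weights are all hypothetical:

```python
import numpy as np

# Hypothetical basis stack: one image per lighting condition for a single
# time slice of the performance (lights x height x width, grayscale here).
rng = np.random.default_rng(0)
num_lights, h, w = 8, 4, 4          # toy sizes; the real rig used ~156 lights
basis = rng.random((num_lights, h, w))

def relight(basis_frames, weights):
    """By linearity of light transport, a new lighting environment is a
    weighted sum of the single-light basis images."""
    weights = np.asarray(weights).reshape(-1, 1, 1)
    return (weights * basis_frames).sum(axis=0)

# Example: full strength from light 0 plus a dim fill from light 3.
weights = np.zeros(num_lights)
weights[0], weights[3] = 1.0, 0.25
relit = relight(basis, weights)
```

Repeating this weighted sum for every time slice yields the relit performance, frame by frame.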
Next Step: The researchers would like to build a larger spherical structure with a greater number of brighter lights that could capture images of an actor’s whole body or of more than one actor at a time. They are also working on finding the best pattern in which to flash the lights on and off so as to obtain the optimum image quality while minimizing the appearance of flickering. – By Corie Lok
Source: Wenger, A., et al. 2005. Performance relighting and reflectance transformation with time-multiplexed illumination. ACM Transactions on Graphics 24:756-64.
Streamlining retrieval on the Web
Results: Some Web-search sites like Clusty and Teoma sort results into categories to help users narrow their searches. Researchers at IBM have devised an algorithm that allows search programs to display a wider selection of categories by analyzing the content of a sample of results rather than that of every page. The researchers performed searches of 1.8 million Web pages, analyzing both the entire body of results and the sample populations selected by the algorithm. They found that even when samples constituted only 1 percent of the total results, the algorithm could still capture most of the popular categories extracted from all the results.
Why it Matters: Looking for information online can be frustrating when search terms have multiple meanings and contexts. Sorting results into “clusters” of related topics can help cut search times, but most search engines that use this technique examine only the most relevant few hundred results to extract common themes. So even topics with plenty of pages devoted to them can be ignored in favor of trendier subjects associated with the same keywords: a search for “macintosh” will identify themes prominent on millions of computer-gossip pages but entirely miss those few thousand pages about Charles Macintosh, father of the rubberized raincoat. The sampling methods devised by Aris Anagnostopoulos, now at Brown University, and Andrei Broder and David Carmel at IBM could allow users to quickly find the pages they want, even when their search terms are ambiguous.
Methods: In a large search, collecting a representative sample is not easy. Most search engines assemble results not all at once, but a handful at a time as needed. They first generate a list of matching pages for each keyword in a query. Those lists are merged, about a hundred results at a time, using logical operators extracted from the query – words such as “and” and “or.” The IBM algorithm, on the other hand, simultaneously sifts through these multiple lists, picking Web pages at random and, if they meet all the conditions of the search, adding them to the sample pool. The algorithm takes measures to ensure that each Web page in a list has an equal probability of being chosen. A search engine could use the sample pool to determine sorting themes.
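The core idea can be pictured with a toy rejection-sampling sketch: draw candidate pages uniformly at random from one keyword's list and keep only those that satisfy the full query, so every matching page has the same chance of entering the sample pool. The data structures and the `sample_results` helper below are illustrative assumptions, not IBM's actual implementation:

```python
import random

# Toy postings lists: keyword -> list of matching page ids (an assumption
# for illustration, not the real search-engine data structures).
postings = {
    "macintosh": [1, 2, 3, 5, 8, 9],
    "raincoat":  [2, 5, 7, 9, 11],
}

def sample_results(postings, k, seed=0):
    """Rejection-sampling sketch: pick pages uniformly from one keyword's
    list, accept them only if they meet every condition of an AND query,
    and stop once the sample pool holds k pages."""
    rnd = random.Random(seed)
    lists = list(postings.values())
    universe = lists[0]                      # candidates drawn from one list
    others = [set(lst) for lst in lists[1:]]
    pool = set()
    while len(pool) < k:
        page = rnd.choice(universe)
        if all(page in s for s in others):   # meets all query conditions?
            pool.add(page)
    return pool
```

A search engine could then run its clustering step on the small sample pool instead of the full result set.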
Next Step: Devising custom sampling techniques to handle the most common types of queries could yield speedier search results. Anagnostopoulos is also interested in investigating whether, when devising sorting categories, giving less popular pages even more weight leads to better results. – By Dan Cho
Source: Anagnostopoulos, A., et al. 2005. Sampling search-engine results. Paper presented at the 14th International World Wide Web Conference. May 10-14. Chiba, Japan.
Digital fingerprints make for easier searching
Results: Microsoft researchers have developed software that can automatically identify audio files – including streaming audio – by extracting and encoding short sections of them to form “fingerprints.” Christopher Burges and colleagues have developed two new applications for this audio-recognition technology: identifying duplicate files in a large collection of audio files and creating “thumbnails,” 15-second-long, recognizable snippets of each file. The software found duplicates in a database of more than 40,000 audio files with a 1.2 percent error rate. In another test involving 68 songs, a panel of users compared thumbnails made with the Microsoft software with snippets of the songs beginning 30 seconds in, and rated the Microsoft thumbnails more likely to contain the titles, choruses, or other distinctive features of the songs.
Why it Matters: Today’s digital-audio libraries are growing in size, and users must manually sort through them to find and remove duplicate files. Microsoft’s method of spotting duplicates could make for easier and faster consolidation of large song collections. Many online music purveyors also offer their customers previews of songs. Currently, those previews are created either manually – someone listens to the song to find a recognizable chorus, then makes the song snippet – or via software that samples only a predetermined segment of each song, which may not contain readily recognizable material. The new software can automatically find the defining part of a song when extracting a thumbnail, making the thumbnail a better indicator of the song’s identity.
Methods: The duplicate detector extracts a fingerprint for each file and puts it into a database. To compare two songs, it considers the location from which the first song’s fingerprint was extracted and looks for a matching fingerprint in the same vicinity in the second song. If it finds a match, it identifies the two as duplicates. After analyzing all the songs in the database, the detector presents the user with a list of duplicate songs.
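The vicinity check can be sketched as follows. Here `fingerprint` is a crude stand-in that hashes a slice of raw samples, whereas the real system encodes perceptual audio features; the offsets, window size, and helper names are all illustrative assumptions:

```python
import hashlib

def fingerprint(samples, start, length=5):
    """Toy fingerprint: hash a short slice of audio samples beginning at
    'start' (the real system encodes spectral features, not raw bytes)."""
    clip = bytes(samples[start:start + length])
    return hashlib.sha1(clip).hexdigest()

def is_duplicate(song_a, song_b, start=10, window=2, length=5):
    """Compare song_a's fingerprint against fingerprints taken from the
    same vicinity of song_b: a match at any nearby offset flags the pair
    as duplicates."""
    fp_a = fingerprint(song_a, start, length)
    return any(fingerprint(song_b, start + d, length) == fp_a
               for d in range(-window, window + 1))
```

Searching a small window of offsets, rather than a single position, tolerates slight misalignments between otherwise identical files.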
The thumbnail generator compares fingerprints within a file. If it finds similar fingerprints at different points, it identifies them as the song’s chorus or some other characteristic feature. If fingerprint analysis doesn’t find a clear repeating feature, the software can analyze other aspects of the song, such as patterns of sound frequencies, to pick out a characteristic section. The software then extracts the 15 seconds of audio surrounding that section as the thumbnail.
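The within-file comparison amounts to looking for a section whose fingerprint recurs later in the same song. A toy version, using the raw sample window itself as the “fingerprint” (the real software matches encoded audio features, and the data below is invented):

```python
def find_repeated_section(samples, length=5):
    """Slide a window over the samples, record each window's fingerprint
    (here just the raw tuple), and return the positions of the first
    section that recurs, a stand-in for locating a chorus."""
    seen = {}
    for start in range(len(samples) - length + 1):
        fp = tuple(samples[start:start + length])
        if fp in seen:
            return seen[fp], start       # first and repeated occurrence
        seen[fp] = start
    return None

# Toy 'song' whose opening five samples repeat at the end, like a chorus.
song = [1, 2, 3, 9, 9, 4, 5, 6, 7, 8, 1, 2, 3, 9, 9]
```

A thumbnail generator would then extract the seconds of audio surrounding the repeated section.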
Next Step: The researchers are working with Microsoft’s product teams to commercialize this technology. Potential applications might include software that cleans up music collections on home computers, freeing up disk space. Online music vendors could also use the thumbnail generator to create previews of the songs offered on their websites. – By Jean Thilmany
Source: Burges, C., et al. 2005. Using audio fingerprinting for duplicate detection and thumbnail generation. Paper presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing. March 18-23. Philadelphia, PA.