HEADLINES AT A GLANCE:
Hackers Hit Large Hadron Collider Web Site
Computerworld (09/12/08) Keizer, Gregg
The European Organization for Nuclear Research (CERN) says it has revived
a Web site for the Large Hadron Collider (LHC) that was attacked by
hackers, although it remains off limits to the public. CERN says the Web
site was only defaced, as hackers temporarily replaced the Web site with a
message. The network did not suffer any permanent damage, and no other
files were installed on the science project's computers. "It was benign,
but it reminds us that we need to be vigilant," says James Gillies of CERN,
which operates LHC. "And no harm was done to the experiment or its
computer network." Turning on the LHC for a test search for the particles
that make up dark matter has generated controversy, with some people arguing
that the collider could create a black hole capable of destroying the
planet. A report in the U.K. newspaper the Telegraph says a group called
the Greek Security Team (GST) has claimed responsibility for the attack.
Turning Social Networks Against Users
Several research projects have explored the viability of distributing
malicious software through social networks. At this week's Information
Security Conference in Taipei, Taiwan, researchers from the Foundation for
Research and Technology Hellas (FORTH) in Greece will present details of an
experiment that enlisted Facebook users in a potentially devastating
Internet attack. The researchers created an application that displays
photographs from National Geographic on a user's profile page, but also
requests large image files from a target server. If enough people added
the application to their page, the flood of requests could shut down a
server or render it inaccessible to legitimate users. FORTH research
assistant Elias Athanasopoulos says the researchers made no effort to
promote their application, yet 1,000 Facebook users installed it within a
few days. The resulting attacks, launched against a server the researchers
set up as the target, were not severe, but Athanasopoulos says they could
disrupt a small Web site and could be made far more intense with a few
minor adjustments. A more detailed
analysis of different social networking sites, by computer-security
consultants Nathan Hamiel of Hexagon Security Group and Shawn Moyer of
Agura Digital Security, found that the potential for damage is far more
severe. The two built examples of malicious applications on top of
OpenSocial, an open application platform used by MySpace, Orkut, and
several other social networking sites. One of the demo applications,
DoSer, logs out any user who views a compromised page for several seconds.
Another, CSFer, sends unauthorized friend requests from targeted users'
accounts.
Hamiel says there are many more ways to attack social networks and there is
little that can be done to defend them.
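For a rough sense of scale, the load described above grows multiplicatively:
installed profiles times page views times image requests per view. The
figures in the sketch below are hypothetical, not numbers from the FORTH
experiment:

```python
# Back-of-envelope estimate of the aggregate load a social-network "photo"
# application could place on a target server. All numbers are hypothetical,
# chosen only to illustrate the scaling, not taken from the FORTH study.

installs = 1_000          # profiles with the application installed
views_per_profile = 20    # page views per profile per day (assumed)
images_per_view = 5       # large images fetched from the target per view (assumed)
image_bytes = 600_000     # size of each requested image in bytes (assumed)

requests_per_day = installs * views_per_profile * images_per_view
bytes_per_day = requests_per_day * image_bytes

print(f"{requests_per_day:,} requests/day")            # 100,000 requests/day
print(f"{bytes_per_day / 1e9:.1f} GB/day of traffic")  # 60.0 GB/day of traffic
```

Even with these modest assumptions, a small site would face a steady stream
of large downloads it never solicited, which is why the researchers judged
the technique capable of disrupting a small Web site.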
When Disaster Strikes
The European Union-funded Sensor and Computer Infrastructure for
Environmental Risks (SCIER) program has developed an early detection and
warning system for natural disasters. SCIER uses state-of-the-art
automation to detect disasters in the making, forecast how an emergency
will probably unfold, alert authorities, and provide information
authorities need to respond effectively. SCIER researchers deployed
networks of ground-based sensors, including video cameras, meteorological
instruments, and river-level gauges in high-risk areas, particularly in the
"urban-rural interface" where homes and businesses are in close proximity
to undeveloped areas. The ground-based sensors are wirelessly linked to a
local area control unit, which structures and compares the raw data to
check for anomalies, verifying, for example, whether a temperature spike at
one sensor is corroborated by nearby sensors. SCIER technical coordinator
Sotiris Kanellopoulos says the system should be able to recognize erroneous
measurements and so prevent false alarms. When the local area control
unit detects a real threat, it activates SCIER's computational
armamentarium to forecast how the emergency will most likely develop over
the first few critical hours. Kanellopoulos says the system is not
designed to simulate fires or other disasters for days, but it can simulate
the first few hours. The system uses sophisticated mathematical models of
how natural disasters unfold based on detailed information about the local
geography along with real-time sensor data concerning wind, rainfall,
temperature, and other variables.
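The article does not disclose SCIER's actual algorithms, but the
cross-checking step it describes, accepting a spike only when neighboring
sensors corroborate it, might look roughly like the following sketch (all
names and thresholds are invented):

```python
# Toy sketch of the cross-checking idea described above: a reading is
# treated as a real anomaly only if enough neighboring sensors see a
# similar deviation. Names and thresholds are illustrative, not SCIER's.

def is_corroborated(reading, neighbor_readings, baseline,
                    spike_threshold=10.0, quorum=0.5):
    """Return True if a spike at one sensor is echoed by its neighbors."""
    if reading - baseline < spike_threshold:
        return False  # no spike at all, nothing to confirm
    agreeing = sum(1 for r in neighbor_readings
                   if r - baseline >= spike_threshold)
    return agreeing / len(neighbor_readings) >= quorum

# A lone 35-degree spike against a 22-degree baseline is rejected...
print(is_corroborated(35.0, [22.5, 23.0, 22.8], baseline=22.0))  # False
# ...but the same spike seen by most neighbors triggers an alert.
print(is_corroborated(35.0, [33.0, 34.5, 22.8], baseline=22.0))  # True
```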
Developing Robots With Human-Like Behavior
The European Science Foundation and the Japan Science Foundation are
encouraging young researchers to develop robots capable of adapting to
situations and coordinating physical movements in ways similar to humans.
Earlier this
year, the two groups co-hosted an event to unite young researchers from the
fields of robotics and cognitive science with the goal of promoting a new
generation of intelligent machines. Göttingen University professor
Florentin Wörgötter gave a speech in which he emphasized that gaining
greater insight into how animals coordinate their movements could help
researchers transfer those principles to the design of robots.
University of Tokyo professor Yasuo Kuniyoshi says conventional methods
based on artificial intelligence techniques developed since the 1980s have
failed to produce adaptable robots; adaptability would require techniques
that break events a robot has not been programmed to expect into smaller
parts so they can be analyzed. The event also focused on the
importance of communication channels between humans and robots, regardless
of how robots receive instructions. Swiss Federal Institute of Technology
professor Aude Billard says enabling robots to interpret a person's
intention and predict their actions will help researchers meet the challenge
of getting robots to imitate simple human gestures.
Making Every Word Count
Determining the most commonly used words in the English language is a
challenge, not least because the very systems that are supposed to make
this research simpler, such as computers and the Internet, can distort
language patterns, writes Carl Bialik. Language corpora need to be
well-balanced so that the diversity of text is fairly represented without
having certain definitions of words with multiple meanings predominate,
says Princeton University linguist Christiane Fellbaum. As matters stand,
word rankings differ from one corpus to another.
"It's easy to build bigger collections using the Web, but that gives short
shrift to genres that don't often make it online, notably fiction," Bialik
notes. "It also ignores spoken words, which are underrepresented in
corpora because they are so much harder and more expensive to collect."
Important subtleties can be missed if spoken-language data is insufficient,
and the cost of collecting spoken words greatly
exceeds the cost of collecting written text. Vassar College's Nancy Ide,
manager of the American National Corpus, says Web-based corpora cannot be
fully shared and analyzed by researchers without copyright permission for
all of the text they contain. She also cites the difficulty of isolating
American English from British English and other variants online.
Microsoft, which uses corpora to correct misspellings in its Word program,
has licensed over one trillion words of English text in each of the past
two years. It also collects text from Hotmail email exchanges, removing
any identifying information, to bolster its spell checker. "Text corpora
is the lifeblood of most of our development and testing processes," says
Microsoft's Mike Calcagno.
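The ranking instability Bialik describes is easy to reproduce in miniature:
counting words the same way over two differently balanced samples yields
different orderings. A toy illustration (both sample "corpora" are
invented):

```python
# Toy demonstration that word rankings depend on corpus balance.
# The two one-line "corpora" below are invented samples.

from collections import Counter
import re

def rank_words(text, top=3):
    """Return the most frequent lowercase word tokens in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w, _ in Counter(words).most_common(top)]

web_corpus = "click here to read more news news news here here"
fiction_corpus = "she said nothing and said it again and again and again"

print(rank_words(web_corpus))      # ['here', 'news', 'click']
print(rank_words(fiction_corpus))  # ['and', 'again', 'said']
```

The same counting procedure yields different top words simply because the
underlying text mixes differ, which is exactly why corpus builders worry
about balance.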
IBM Testing Voice-Based Web
IBM's India Research Laboratory's (IRL's) Spoken Web project would make
Web-based information accessible to those unable to read or write, or to
those without access to the Internet. The Spoken Web project takes
advantage of the rapid deployment of mobile phones in countries such as
India, where PC-based Internet penetration is not as high as that of mobile
phones. IRL director Guruduth Banavar says the goal is to ensure that
everything that is done on a Web browser on a PC can be done with a mobile
phone. Spoken Web technology will enable local communities to create and
disseminate locally relevant content and interact with Web sites by
speaking on a phone, Banavar says. Spoken Web uses the Voice eXtensible
Markup Language (VoiceXML) and the Hyperspeech Transfer Protocol to mirror
the Web in a telecom network. The technology enables users to create and
browse VoiceSites that have their own uniform resource locators, follow
VoiceLinks, and conduct business transactions and online purchases. Users
will be able to access the Spoken Web using a toll-free number, and
VoiceSites can be created over the phone using a set of templates on the
server site, Banavar says. The Spoken Web potentially could be linked to
the Internet, but sites on the Web would have to be converted to support
spoken interfaces, both through VoiceXML and by how Web content is designed
and presented.
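VoiceXML is a W3C standard, but the article does not show what a generated
VoiceSite page looks like; the sketch below merely guesses at the
template-filling step it mentions, with invented field names and shop
details:

```python
# Minimal sketch of generating a VoiceXML page from a server-side template,
# in the spirit of the VoiceSite templates the article mentions. The
# template fields and shop details are invented; only VoiceXML is standard.

from string import Template

VOICE_PAGE = Template("""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="welcome">
    <block>
      <prompt>Welcome to $shop_name. We are open $hours.</prompt>
    </block>
  </form>
</vxml>""")

print(VOICE_PAGE.substitute(shop_name="Lakshmi Vegetables",
                            hours="from 8 a.m. to 6 p.m."))
```

A voice browser reaching this page over the phone would speak the prompt
aloud, playing the role a rendered HTML page plays for a PC browser.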
Clemson University Turns Idle Computer Time Into
Solutions for World Problems
Clemson University is donating the idle processor cycles of computers in
its instructional labs to the World Community Grid (WCG). The
nonprofit WCG, managed by IBM, aims to create the largest public computing
grid to benefit humanity. IBM says Clemson's School of Computing
contributes more than four years of CPU time every day, meaning that
approximately 1,500 Clemson computers work on WCG problems daily. On some
days, Clemson ranks first in the nation, and as high as fourth in the
world, in contributions to the WCG. "Most computers at universities are
underutilized. For instance, at night when everyone sleeps, the computers
are idle," says Clemson professor Sebastien Goasguen. "By joining WCG, we
maximize our utilization by virtually donating computers when we don't use
them. In doing so, we contribute to humanitarian causes." Almost 400,000
teams from around the world donate computing time to the WCG.
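The CPU-time figure and the machine count quoted above are consistent, as a
quick check shows (taking a year as 365.25 days):

```python
# Sanity check of the figures above: four years of CPU time contributed
# per day corresponds to roughly 1,500 machines computing around the clock.

cpu_years_per_day = 4
machine_days = cpu_years_per_day * 365.25  # ~1461 machine-days each day
print(round(machine_days))                 # 1461, i.e. ~1,500 computers
```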
Google Outlines the Future of Search
Google's Marissa Mayer expects to see significant advances in search
technology over the next decade. "These include mobile devices offering us
easier search, Internet capabilities deployed in more devices, and
different ways of entering and expressing queries by voice, natural
language, picture or song, to name just a few," she says in a blog post.
Search technology could even take a user's contacts or location into
consideration as it delivers results. "Maybe the search engines of the
future will know where you are located, maybe they will know what you know
already or what you learned earlier today, or maybe they will fully
understand your preferences because you have chosen to share that
information with us," Mayer says. She even envisions people wearing a
device that would be capable of listening to conversations, performing
searches in the background, and then returning the relevant information to
the user. Mayer says Google should focus more on translation services,
which would help make the Web available to everyone regardless of
dialect.
Putting a 'Korset' on the Spread of Computer
Viruses
Tel Aviv University professor Avishai Wool and graduate student Ohad
Ben-Cohen have developed Korset, an open source antivirus program for
Linux-based servers. "We modified the kernel in the system's operating
system so that it monitors and tracks the behavior of the programs
installed on it," Wool says. Wool says Korset provides a model for the
operating system kernel that predicts how software on the server should
run. If the kernel detects abnormal activity, it stops the program from
working before malicious actions can occur. Wool says their solution is
much more efficient and does not consume as many resources as traditional
antivirus software. "There is an ongoing battle between computer security
experts and the phenomenal growth of viruses and network worms flooding the
Internet," he says. "The fundamental problem with viruses remains unsolved
and is getting worse every day." Wool's research was presented at the
recent Black Hat Internet security conference.
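The article does not detail Korset's model; one common way to realize "stop
a program when it deviates from predicted behavior" is to check each
observed system call against a precomputed transition graph. The following
is a simplified illustration of that general idea, not Korset's actual
implementation (the model and call trace are invented):

```python
# Simplified illustration of behavior enforcement: each observed system
# call must be a legal successor of the previous one in a precomputed
# model of the program. The model and trace are invented examples.

ALLOWED_NEXT = {
    "start": {"open"},
    "open":  {"read", "close"},
    "read":  {"read", "write", "close"},
    "write": {"write", "close"},
    "close": set(),               # the program should exit after close
}

def monitor(syscalls):
    """Raise as soon as a call is not allowed by the behavior model."""
    state = "start"
    for call in syscalls:
        if call not in ALLOWED_NEXT[state]:
            raise RuntimeError(f"blocked: {call!r} not expected after {state!r}")
        state = call

monitor(["open", "read", "write", "close"])  # conforms to the model
try:
    monitor(["open", "read", "exec"])        # deviates from the model
except RuntimeError as err:
    print(err)  # blocked: 'exec' not expected after 'read'
```

Because the model is fixed ahead of time, checking each call is a single
set lookup, which is consistent with Wool's claim that the approach costs
far less than signature scanning.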
Microsoft's Lab Cooks Up Photo Collage Program
Microsoft's U.K.-based research lab has released AutoCollage 2008, new
software that automates the process of creating photo collages. The
company has applied its facial recognition and blending technologies to
AutoCollage 2008, which makes assembling photos into a collage easier and
faster. AutoCollage 2008 is designed to find representative images, avoid
repetitive features, and match images so they flow seamlessly into
one another. Users can resize photos and then print them. Alisson Sol,
the development manager for the project, says a Web service is "technically
possible," but users with a low-bandwidth capacity who upload a number of
large photos could encounter some problems with the program.
Researchers Develop Automated Cell-Screening
System
Researchers at Carnegie Mellon University's Ray and Stephanie Lane Center
for Computational Biology have developed an automated system that can
analyze images of cells much faster and more accurately than humans. The
technique, developed by robotics and machine learning professor Geoffrey
Gordon, computational biology professor Robert Murphy, and recent Ph.D.
graduate Shann-Ching Chen, will be capable of analyzing more than 100,000
cells and of establishing relationships between those cells.
Gordon says it is easier to classify cells by studying a group of cells
instead of just a single cell, though doing so requires the computer to
account for a large amount of information, which is difficult and
time-consuming.
To solve this problem, the researchers developed a new reasoning
methodology that is potentially far faster at reasoning through entire
groups of cells. Murphy says the system describes the distribution of a
given protein in each cell using numerical features, and then learns which
features are associated with which subcellular pattern, similar to how
humans may use color, smoothness, and shape to distinguish between fruits.
The system relies on the protein distribution in cells to classify them
into different groups. The researchers say the system, which also can
recognize subtle differences between cells, could significantly change how
biological research is done.
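A stripped-down sketch of the group idea: classify each cell from its
numerical features, then let all the cells imaged together vote, so that a
few ambiguous cells are overruled by their neighbors. The features,
centroids, and labels below are invented; this is not the CMU team's
method:

```python
# Toy sketch of group-level cell classification: score each cell with a
# nearest-centroid rule, then take a majority vote over all cells imaged
# together. Features, centroids, and labels are invented for illustration.

from collections import Counter

CENTROIDS = {                     # (brightness, texture) per pattern
    "nuclear":       (0.9, 0.2),
    "mitochondrial": (0.4, 0.8),
}

def classify_cell(features):
    """Nearest-centroid label for a single cell's feature vector."""
    return min(CENTROIDS, key=lambda label: sum(
        (f - c) ** 2 for f, c in zip(features, CENTROIDS[label])))

def classify_field(cells):
    """Majority vote across all cells in one microscope field."""
    votes = Counter(classify_cell(c) for c in cells)
    return votes.most_common(1)[0][0]

field = [(0.85, 0.25), (0.9, 0.1), (0.5, 0.6)]  # one ambiguous cell
print(classify_field(field))                    # nuclear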
Threat From DNS Bug Isn't Over, Experts Say
Security experts have only temporary solutions so far for the critical DNS
flaw, which if exploited on a large scale could bring down the entire
Internet. Since vendors simultaneously released a patch after IOActive
researcher Dan Kaminsky discovered the flaw earlier this year, the number
of servers vulnerable to an attack has dropped dramatically, from over 85
percent to less than 30 percent. Still, experts warn that the patch is a
temporary fix and only hinders attackers from exploiting the flaw. "What
we've got out there so far are truly Band-Aids," says Alan Shimel, chief
strategy officer at StillSecure, a firm that has been monitoring the
vulnerability since its discovery. "There are questions on how to move the
solution to the firewall level. We need a new DNS." Attackers are already
trying to find ways around the patch. MessageLabs analyst Paul Wood has
seen a surge in traffic by hackers searching out systems with unpatched
vulnerabilities.
Disruption-Free Videos
Researchers at the Fraunhofer Institute for Telecommunications,
Heinrich-Hertz-Institut (HHI) in Berlin, have created a new video coding
format that uses additional data to protect the most important data packets
during transmission. The extension of the H.264/AVC coding format allows
digitally transmitted images to survive transmission errors: the additional
data packets ensure that if any information is lost, only the picture
quality is affected. "If, say, two video packets need to
be transmitted, we equip an additional data packet with the result of the
sum of the bytes in the two video data packets," says Thomas Wiegand with
HHI and also a professor at the Berlin Institute of Technology. "If any of
these three data packets gets lost, we can deduce the content of the
original two." Called scalable video coding (SVC), the new format runs on
all H.264/AVC-compatible devices, independently of overall data volume.
The SVC standard can be used for HDTV, the Internet, videoconferencing,
surveillance technology, or mobile radio.
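Wiegand's example translates almost directly into code. A minimal sketch
using byte-wise sums modulo 256 for the extra packet (the arithmetic in the
actual codec may differ):

```python
# Minimal sketch of the recovery scheme Wiegand describes: a third packet
# carries the byte-wise sum of two video packets, so losing any one of the
# three still lets the receiver reconstruct the originals. Sums are taken
# modulo 256 here; the arithmetic in the real codec may differ.

def parity(a: bytes, b: bytes) -> bytes:
    return bytes((x + y) % 256 for x, y in zip(a, b))

def recover(survivor: bytes, parity_pkt: bytes) -> bytes:
    """Rebuild the lost packet from the surviving one and the parity."""
    return bytes((p - s) % 256 for p, s in zip(parity_pkt, survivor))

pkt1, pkt2 = b"video-frame-A", b"video-frame-B"
extra = parity(pkt1, pkt2)

# Suppose pkt1 is lost in transmission:
assert recover(pkt2, extra) == pkt1
print("lost packet reconstructed")
```

Losing the parity packet itself costs nothing, and losing either video
packet costs only the redundancy, which is why the scheme degrades quality
rather than interrupting the broadcast.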
Watch and Learn: Time Teaches Us How to Recognize Visual
Objects
Massachusetts Institute of Technology (MIT) neuroscientists have tricked
the brain into confusing one object with another, demonstrating that time
teaches us how to recognize objects, a discovery that could help develop
more brain-like computer vision systems. Human eyes never see the same
image twice. Subtle differences in the direction of the gaze, angle of the
view, distance, light, and other factors mean that even if someone is
looking at the same object multiple times, the object creates innumerable
impressions on the retina. Every time a person's eyes move, the pattern of
neural activity changes while the perception of the object stays the same.
This stability is called invariance, and is fundamental to a person's
ability to recognize objects. Although it feels effortless, it is a
central challenge in computational neuroscience, says MIT's James DiCarlo.
"We want to understand how our brains acquire invariance and how we might
incorporate it into computer vision systems," DiCarlo says. One possible
explanation is that our eyes tend to move rapidly, about three times per
second, while physical objects usually change more slowly, which means
differing patterns of activity in rapid succession often reflect different
images of the same object. MIT researchers are working to understand the
brain mechanisms behind this effect. In their latest study, the
researchers had monkeys watch an altered world while recording neuron
patterns in the inferior temporal (IT) cortex, a high-level visual brain
area where object invariance is believed to occur. After the monkeys
had watched the altered world for a while, the IT neurons became confused.
The researchers are now testing this concept using computer vision systems
viewing real-world videos.
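The temporal-continuity principle can be caricatured in a few lines:
impressions arriving in rapid succession are bound to one object identity,
while a long gap starts a new one. The sketch below is a toy, not the MIT
model; the data stream and time window are invented:

```python
# Toy caricature of invariance learning by temporal continuity: retinal
# impressions seen within a short window are bound to one object identity.
# The data and window are invented; this is not the MIT group's model.

def bind_by_time(impressions, window=0.35):
    """Group (timestamp, image_id) pairs into presumed single objects."""
    objects, current = [], [impressions[0]]
    for prev, nxt in zip(impressions, impressions[1:]):
        if nxt[0] - prev[0] <= window:
            current.append(nxt)      # rapid succession: same object
        else:
            objects.append(current)  # long gap: a new object begins
            current = [nxt]
    objects.append(current)
    return objects

# Three quick glances at one object, then a pause, then another object:
stream = [(0.0, "cup-left"), (0.3, "cup-center"), (0.6, "cup-right"),
          (2.0, "lamp-front"), (2.3, "lamp-side")]
for group in bind_by_time(stream):
    print([img for _, img in group])  # cup views, then lamp views
```

Swapping objects mid-stream, as in the altered-world experiment, would make
this rule bind two different objects to one identity, mirroring how the IT
neurons became confused.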
Climate Computer Modeling Heats Up
Climate researchers are using a $1.4 million award from the National
Science Foundation (NSF) to generate petascale computer models that depict
detailed climate dynamics while establishing the foundation for the next
generation of complex climate models. The team says the availability of
petascale computing promises a golden opportunity to advance Earth system
science and help improve the quality of
life on the planet. "The limiting factor to more reliable climate
predictions at higher resolution is not scientific ideas, but computational
capacity to implement those ideas," says NSF's Jay Fein. "This project is
an important step forward in providing the most useful scientifically-based
climate change information to society for adapting to climate change." For
example, the increase in computing capabilities has enabled Ben Kirtman, a
researcher at the University of Miami Rosenstiel School of Marine and
Atmospheric Science, to develop a novel weather and climate modeling
strategy designed to isolate the interactions between weather and climate.
"The information from this project will serve as a cornerstone for
petascale computing in our field, and help to advance the study of the
interactions between weather and climate phenomena on a global scale,"
Kirtman says.