THURSDAY, OCTOBER 19, 2017
UC Berkeley researchers report challenges, solutions facing artificial intelligence
Electrical Engineering and Computer Sciences faculty members and researchers from UC Berkeley’s Real-Time Intelligent Secure Execution Lab, or RISELab, released a report Monday outlining challenges facing the progress of artificial intelligence technologies as well as some ways to address them.
RISELab, which launched in January, is the most recent installment of the campus computer science division’s ongoing tradition of five-year collaborative research labs. This particular lab is focused on machine learning, security and computer systems, according to RISELab member and campus professor of computer science David Patterson.
Despite its youth, RISELab has many expert faculty members and leading researchers who have conducted research in computer science for most of their careers. Patterson, for instance, has dedicated his 40-year career to computer science development and has participated in about eight labs on campus.
In their report, which represents about six months of work, RISELab’s researchers defined several major challenges facing technologies with artificial intelligence, or AI. One challenge lies in computer hardware, as improvements in processor performance slow, according to EECS professor and RISELab director Ion Stoica.
“The amount of data that we have is increasing exponentially, but the capabilities of processors aren’t as fast as they should be,” Stoica said.
The “laws of physics,” RISELab co-founder and campus assistant EECS professor Joseph Gonzalez added, will not allow processors to be made faster, and more resources are needed to meet the demands of computation. According to Patterson, the slowing improvement of computer processors impedes AI development, and professionals and researchers will have to find a way to use machine learning to process larger amounts of data.
Gonzalez also identified AI security as a major concern, citing the growth of “big-data technology,” or technology that deals with unusually large sets of data. Stoica added that although using more data is better for creating personalized recommendations — such as how Netflix uses big data to suggest movies for users — there is also an impact on the user’s privacy.
“As AI becomes more and more important, and has more impact on our lives, we need the systems to be not only intelligent, but also robust, explainable and secure,” Stoica said.
Possible solutions listed in the report include specializing processors for a single task, which would increase their efficiency and speed, according to Stoica.
EECS and statistics campus professor Michael Jordan commented on the relevance of the report’s conclusions, stating that RISELab is part of a worldwide effort to improve AI development.
“We’re gonna see examples of artificial intelligence in all kinds of products, and without proper care, we would be compromising personal security,” Patterson said. “We should build things that work well, that people can enjoy and that are secure.”