Written by Noelle Toumey Reetz
As artificial intelligence (AI) continues to rapidly advance, there are few areas that will not be radically impacted. In fact, it’s already begun.
From education to commerce and medicine, research is playing an increasingly important role in addressing the new societal and technological changes underway.
Experts from across Georgia State are studying the potential impacts as well as expected leaps forward in medical diagnoses and treatment. They are also preparing the next generation of experts for the technical and sociological changes that will drive the workforce of the future.
“These technologies are providing individuals, teams and organizations the potential to reconceive what we do, how we do it, how we collaborate, how we create products and even how we live our lives,” says Arun Rai, Regents’ Professor and Howard S. Starks Distinguished Chair and Director of the Center for Digital Innovation at Georgia State’s Robinson College of Business.
Rai has been working at the forefront of digital innovation for decades and — in addition to teaching and conducting research — his work is focused on helping businesses make sense of rapid changes in digital technologies. His research aims to determine how AI can bring benefits and minimize risks across industry contexts, from education to health care to logistics and supply chains, to high tech and consumer goods.
“What we are seeing is a much more pronounced shift towards AI affecting virtually every sector of our economy. It’s affecting all of our major industries and people’s living and working. But I think where one gets a more granular understanding is to shift the discussion from what it’s doing at the level of jobs to what it is doing at the level of skills,” explains Rai.
He says that the discussion about how AI will affect the workforce is not static: certain skills will be augmented by AI, certain skills will be displaced, while new skills will be needed for existing and new jobs. Rai says partnerships among industry, academia and government will be crucial to upskilling and reskilling the workforce alongside the rapid development of the technology.
At Georgia State, researchers are using AI and machine learning to study the deepest recesses of the human brain.
Newly published research is finding new ways to produce earlier diagnoses of mental illnesses and neurological disorders, including schizophrenia and epilepsy. The developments offer solutions in real-world settings that can aid both patients and medical professionals.
“AI is poised to make a major impact in expanding our understanding of the brain and also in the way we make decisions about how to treat or prevent illness,” says Vince Calhoun, Distinguished University Professor and director of the Tri-Institutional University Center for Translational Research in Neuroimaging and Data Science (TReNDS Center). “The main strength of AI is in synthesizing large amounts of data to help us maximize the information we have available, too much for a person to put together. While it is still early, AI has already helped us to improve our ability to make reliable predictions and to suggest the most informative aspects of the data.”
Calhoun says researchers are harnessing these advances in technology to both better understand and visualize the impact of mental illness on the brain and to make better diagnoses that will lead to more effective treatment options and could eventually allow doctors to stave off some diseases altogether.
A recent study published in JAMA Neurology finds that artificial intelligence models can be trained to interpret routine clinical electroencephalograms (EEGs) with accuracy equivalent to that of human experts. The approach is known as automated EEG interpretation.
Sergey Plis, an associate professor of computer science at Georgia State and director of the machine learning core at TReNDS, worked on the study along with Calhoun and an international team of researchers and industry partners. He says their findings represent a leap forward in harnessing AI for clinical use, and that the same principles and strategies can be extended to countless applications, from climate change models to space exploration, as well as wide-ranging medical uses such as detecting tumors and other conditions.
“What the model can do is greatly reduce the amount of time highly trained clinicians are spending on clear cases, and bring up the possibly controversial cases where radiologists can spend their time focused on red flags instead of sifting through data,” explained Plis.
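The triage workflow Plis describes can be sketched in a few lines: a model scores each recording, high-confidence results are handled automatically, and ambiguous cases are escalated to a clinician. The sketch below is a toy illustration of that idea only; the classifier, confidence threshold and data fields here are placeholders, not the actual model from the study.

```python
# Toy triage: route low-confidence model outputs to human review.
# The "classifier" is a stand-in; a real system would use a trained
# EEG model (this is not the JAMA Neurology study's model).

def triage(recordings, classify, threshold=0.90):
    """Split recordings into auto-labeled and needs-review queues."""
    auto, review = [], []
    for rec in recordings:
        label, confidence = classify(rec)
        if confidence >= threshold:
            auto.append((rec, label))
        else:
            review.append(rec)  # escalate ambiguous cases to a clinician
    return auto, review

# Stand-in classifier: pretends confidence tracks signal clarity.
def fake_classifier(rec):
    return ("abnormal" if rec["score"] > 0.5 else "normal", rec["clarity"])

recs = [
    {"id": 1, "score": 0.9, "clarity": 0.97},
    {"id": 2, "score": 0.2, "clarity": 0.95},
    {"id": 3, "score": 0.6, "clarity": 0.55},  # ambiguous -> human review
]
auto, review = triage(recs, fake_classifier)
print(len(auto), len(review))  # 2 auto-labeled, 1 sent for review
```

The point of the design is the threshold: raising it sends more borderline cases to experts, trading clinician time for safety.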
Recently, another international team of researchers at TReNDS identified brain pattern changes connected to schizophrenia risk in children with subthreshold symptoms, using a new hybrid, data-driven method described in a study published in the Proceedings of the National Academy of Sciences.
In yet another recent study published in Nature Scientific Reports, scientists at the TReNDS Center built a sophisticated computer program that was able to comb through massive amounts of brain imaging data and discover novel patterns differentially linked to autism spectrum, Alzheimer’s disease and schizophrenia. The brain imaging data came from scans using functional magnetic resonance imaging (fMRI), which measures dynamic brain activity by detecting tiny changes in blood flow.
Plis says that, just as machines automated manual labor during the Industrial Revolution, artificial intelligence will help to automate cognitive labor.
“There are so many applications where AI can be used, but it’s very hard to predict where it will go,” says Plis. “I think we will automate a lot of tasks that require cognitive load, but automating some tasks will be much harder than we thought initially.”
Recent research by Georgia State criminologists finds that AI-powered facial recognition can lead to increased racial profiling.
Facial Recognition Technology (FRT) is an artificial intelligence–powered technology that tries to confirm the identity of a person from an image.
The study by Georgia State researchers finds that law enforcement agencies that use automated facial recognition disproportionately arrest Black people.
“We believe this results from factors that include the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues,” says Thad Johnson, one of the study’s authors.
Johnson, a former police officer, teaches criminology at Georgia State. He is one of many safety advocates who, while acknowledging the technology’s potential to improve public safety, are calling for enforceable safeguards to prevent unconstitutional overreach such as racial profiling and false arrests.
As machine learning and artificial intelligence evolve, both curricula and tools for student success at Georgia State are developing, too.
One example is Georgia State’s AI-enhanced text messaging tool, “Pounce.” The chatbot is nationally recognized for its success at improving student progress and retention rates. Research finds that student performance jumps when classes employ the chatbot to keep students connected. Students get direct text messages about class assignments, academic support and course content, and the tool has proved transformative for student success, reducing the average time it takes to earn a degree by almost a full semester.
“We are partnering with MIT (The Massachusetts Institute of Technology), supported by grants by the Axim Collaborative, to design and evaluate an AI tutor for equitable student success in programming courses,” says Rai. “Leveraging generative AI, we are developing a solution for personalized anytime-anywhere tutoring for students which we will evaluate with respect to equitable student learning, internship opportunities and career aspiration.”
Established in spring 2020, Georgia State’s Inspire Center is one of just a handful in the U.S. designated by the National Security Agency and the Department of Homeland Security as a National Center of Academic Excellence in Cyber Defense Research (CAE-R) and a National Center of Academic Excellence in Cyber Defense Education (CAE-CDE). Georgia State is the only university in Georgia to have received both designations.
The CyberCorps Scholarship for Service program offers nearly $4 million in funding annually, providing scholarships to students in cybersecurity programs. The scholarships are not limited to computer science; they are also available to students studying information systems. After graduation, in exchange for the scholarship, students work for a federal agency.
Research underway at Georgia State is leveraging the power of artificial intelligence and robotics. Dr. Jonathan Shihao Ji is an associate professor of Computer Science and Director of the Intelligent Systems Lab at Georgia State. A recent grant from the Department of Defense brought ‘Spot’ – a four-legged, dog-like robot – to Georgia State from Boston Dynamics.
“The main applications of our AI and Robotics research are for search and rescue, facilities maintenance, and emergency response, where it’s unsafe to deploy human investigators for the tasks due to unfriendly, hazardous or even hostile situations,” says Dr. Ji. “In such cases, we could deploy a robot (e.g., Spot) for tasking.”
Dr. Ji says Spot has exceptional mobility, allowing it to traverse a wide range of terrains, including rocky and uneven surfaces, stairs or snow. Spot is also equipped with a variety of sensors, such as RGB-D cameras, infrared and lidar, to perceive the surrounding environment. The research taking place at Georgia State involves developing AI models, specifically computer vision and natural language processing algorithms, to enable Spot to handle tasks like navigation, object detection and manipulation, with a natural language interface for human-robot interaction. That means the user can direct Spot with natural language instructions in real time. This is one of the projects that will be enhanced with the addition of a five-year, $10 million grant from the Department of Defense.
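The natural language interface described above boils down to mapping a spoken or typed instruction to a robot action and its target. The toy sketch below illustrates the idea with simple keyword matching standing in for the lab’s actual NLP models; the action names (e.g., `navigate_to`) are hypothetical and not part of Spot’s real API.

```python
# Toy natural-language robot interface: keyword matching stands in
# for real NLP models, and the action names are hypothetical.

ACTIONS = {
    "go to": "navigate_to",
    "find": "detect_object",
    "pick up": "grasp_object",
}

def parse_command(utterance):
    """Map an instruction to an (action, target) pair."""
    text = utterance.lower().strip()
    for phrase, action in ACTIONS.items():
        if text.startswith(phrase):
            target = text[len(phrase):].strip()
            return (action, target)
    return ("unknown", text)  # unrecognized commands get flagged

print(parse_command("Go to the stairwell"))  # ('navigate_to', 'the stairwell')
print(parse_command("Find the red valve"))   # ('detect_object', 'the red valve')
```

A real system replaces the keyword table with a language model that handles paraphrases ("head over to the stairs") and grounds targets in what the robot’s cameras actually see.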
Researchers from units across campus are working at the intersection of cybersecurity and privacy. Raj Sunderraman, associate chair of the Department of Computer Science, has been working at the leading edge of computer science for more than 20 years.
“We think of ‘Trustworthy AI’ as systems that are inherently accountable, fair, ethical, transparent, reliable, safe, unbiased, secure and privacy protecting,” explains Sunderraman.
He says there are numerous problems that can still arise with AI that experts are working to address.
“AI systems that are deemed untrustworthy include those which exhibit bias towards certain populations, reveal personal data to the public, don’t provide explanations on why certain decisions were made by the system, do psychological or physical harm to the end user, or are not accountable for their actions,” he says.
Most people have been interacting with AI for years without even realizing it, such as when they apply for a mortgage or credit card online or get a quote for insurance.
When data is privacy-sensitive – for example, financial or health care-related information – regulations like HIPAA prevent companies from sharing data or overseeing how the information is handled. Researchers are working to develop techniques and tools to safeguard encrypted data and enable machine-learning-as-a-service platforms without compromising privacy.
As AI becomes more widely integrated, there are growing calls for both regulation and fairness. Safety advocates question who has access to all the data, where the data comes from and where these developments might lead.
Experts say the advancement of AI is reminiscent of the onset of the internet, when there were few guardrails for security or privacy. But because we are still in the early days, there is a window of opportunity to address some of these issues in a more fundamental way.
A new 5-year, $10 million grant will fund a Department of Defense Center of Excellence in Advanced Computing and Software (COE-ACS) at Georgia State. The principal investigator, Associate Professor of Computer Science Jonathan Ji, says the center is an interdisciplinary research alliance led by Georgia State in collaboration with Duke University and partners from the U.S. Army Research Laboratory.
The research agenda is driven by a collaborative cohort of researchers drawn from multiple disciplines to solve the most critical problems in AI and robotics, particularly, human-robot interaction, VR/AR (Virtual Reality/Augmented Reality), edge computing and trustworthy AI. Education and outreach are also critical components of the center, which will train and employ 12 Ph.D. students and 100 undergraduate students over the life of the initial grant.
Recently, the U.S. Senate Judiciary Committee held its first-ever hearing on regulating AI technology, and the United Nations held its first-ever summit on AI regulation, where Security Council members urged regulation of the technology to stave off possible misuse and stay ahead of rapid advancements.
“We are still working to learn how a lot of AI systems work, and that’s the scary part: if you don’t address these issues, it’s likely to become much more difficult down the road,” says Sunderraman.
Those who are working to adapt to these rapid changes say they see significant benefits ahead along with the risks, including the democratization of AI models.
“We are examining whether open data initiatives can actually end up democratizing these AI innovation processes. Right now, the platform companies have a tremendous advantage with data, like Google for example,” explains Rai. “So, how can we leverage these technologies in smart and creative ways so we can achieve inclusive prosperity and democratize innovation, so we don’t end up with winners and losers.”
As it turns out, AI isn’t the only one learning. We are too.
Illustrations by Tara Jacoby