Ronke Ekwensi, chief data officer for T-Mobile, can explain her job pretty easily, at least from a 30,000-foot perspective.
“The way I think about my role as CDO is to really focus on extracting value from data,” she said recently in the East Village on Northeastern's Boston campus.
This description is simple enough, but it's in the fine details of analyzing and managing this data for one of the world's leading wireless service providers that the complexities of the job arise.
One topic that has recently been in the spotlight is artificial intelligence and how best to integrate it into T-Mobile's operations.
Ekwensi, a Northeastern graduate, was one of more than a dozen executives who spoke during a business conference hosted and organized by Northeastern's Institute for Experiential AI. The theme of the event was “Leading with AI.”
Leaders from companies such as Google, McDonald's and Intuit were among the speakers.
Ekwensi, who earned a PhD in data policy from the university in 2022, is well aware of the important relationship between data and artificial intelligence. Data is often called the “fuel” of artificial intelligence, and for good reason: without access to data, your favorite AI-powered chatbot simply wouldn't be able to function.
To develop and implement these systems in an equitable manner, Ekwensi outlined a number of principles she considers.
Paramount to the use of these systems is that users have a shared understanding of what it means to implement and use artificial intelligence responsibly.
“The bad news is that there is no common definition of responsible AI, but what we do have are common thematic elements,” she said. “These common themes define the areas of risk associated with artificial intelligence.”
So what are these common thematic elements? They include ethics, fairness, accountability, security and transparency, Ekwensi explained.
“It's really about codifying an organization's values, language that puts up guardrails, because you can't anticipate every risk at every point,” she said. “What you need to do is create guardrails that help your organization develop and use AI responsibly.”
There are certainly challenges in providing clear guidelines and directions. AI technologies like large language models are often referred to as “black boxes” because we don't fully understand how they work, she said.
“What concerns me most with LLMs is the lack of explainability,” she said. “How we think about it and how we deal with it will be a challenge. I don't have any answers right now.”
When it comes to actually using AI, it's important to be clear and intentional about exactly what task you want the model to complete, she said. You run into problems when you try to apply AI indiscriminately across a series of tasks. It is better to start with a specific task.
“When you think about a well-defined use case, you have the opportunity to make decisions about the modeling technique. You have the opportunity to make decisions about what data you need for a particular model,” Ekwensi said.
People play, and will continue to play, an integral role as the technology matures and improves. Data management and protection are also critical. She highlighted encryption technologies and other tools that can keep customer data secure and compliant with regulatory requirements.
Northeastern researchers are certainly doing their part through the Institute for Experiential AI. In a series of lightning talks, faculty members of the institute discussed their work.
Eugene Tunik is the director of AI + Health at the institute. Tunik explores ways in which AI can be used in the clinical setting and become a “member of the clinical decision support team.”
Artificial intelligence has unlocked advances in a number of healthcare settings. From home care and early disease detection to accelerating new drug development and educating medical students, AI is advancing the field at a rapid pace.
“We have a very large team in the AI + Health discipline of postdocs, researchers, scientists, and faculty who are spread out between Boston and Portland, Charlotte, and other campuses,” Tunik said.
Sam Scarpino, director of AI + Life Sciences, said his team is focused on using data science, artificial intelligence and advanced analytics to help solve some of the medical industry's most vexing problems affecting patients and physicians.
In the case of pandemics, he demonstrated ways in which technology and data could be used to minimize disruption.
“We could leverage high-resolution mobility data to improve contact tracing methods to keep schools open, to keep businesses open, similar to what they're doing in South Korea,” he said.
In addition, his team focuses on “developing ethical and responsible frameworks to ensure that data is used for the purpose for which it was collected.”
“Often data is collected for a specific use case, but it sits on a shelf collecting dust, which in many cases wrongs the person whose data it is as much as a potential invasion of privacy would,” he said.
Ardeshir Contractor discussed his involvement as research director for the AI for Climate and Sustainability (AI4CaS) focus area.
Contractor said the team is working with institute members and industry partners to see how they are addressing the climate crisis.
“Our work in artificial intelligence has been used for large climate models,” he said.
“Our approach is to try to combine specialties,” he added. “We bring computer science. We bring data engineering. I think the most important thing is that we want to emphasize resilience.”
Cesareo Contreras is a reporter for Northeastern Global News. Email him at c.contreras@northeastern.edu. Follow him on X/Twitter @cesareo_r and Threads @cesareor.