“When more people make a breakthrough discovery or build a coalition for progress, it helps advance a vision of the world in which we all want to live.” That statement of philosophy reflects how UL Research Institutes (ULRI) employees have long approached their work.
Known for boundary-pushing research, ULRI, formerly Underwriters Laboratories, seeks to identify and mitigate threats to the environment, public health and digital safety that are not well addressed elsewhere, and includes institutes focused on electrochemical, digital, chemical and fire hazards. Researchers there pursue innovative projects, often in partnership with distinguished academic and scientific organizations around the world. Here, we take a look at the minds behind the science at three ULRI institutes.
A long hunt for safer batteries
In 1999 American astronauts wanted to bring a digital camcorder on a space shuttle mission. But the camcorder was powered by a lithium-ion battery, a relatively new technology that hadn’t yet been approved for human spaceflight. To ensure the device wouldn’t introduce unknown hazards to the mission, Judy Jeevarajan, then a research scientist at NASA’s Johnson Space Center, ran rigorous tests on the battery. In the process, she became the first person to certify a lithium-ion battery for human spaceflight.
A quarter-century later, lithium-ion batteries are everywhere, from our ubiquitous phones to implanted medical devices to satellites blinking at us in the night sky. And Jeevarajan, now vice president and executive director of the Electrochemical Safety Research Institute (ESRI) at ULRI, continues to lead the charge toward making them safe, wherever they’re used.
As much as she relished her time as a senior scientist at NASA, Jeevarajan joined ULRI in 2015, eager to embrace the organization’s broader safety goals. In 2021 she was tapped to lead ESRI, newly created with the mission to “advance safer energy storage through science.” She quickly built the institute to a staff of 21 chemical engineers, electrical engineers, fire engineering scientists, materials scientists, computer-modeling experts and other specialists.
Located in a University of Houston technology park, the team collaborates with researchers in academia and industry to understand the workings of different energy-storage systems—particularly advanced batteries and hydrogen—including what may cause them to break down and when they may become dangerous. The question that drives ESRI’s work, says Jeevarajan, is “what can we do to make the world a safer place, especially with respect to energy . . . and sustainability?”
It’s a question of particular pertinence now, as battery-powered devices are crucial in the move toward renewable energy. Lithium-ion batteries—light, powerful, rechargeable—are the most widely used. But if improperly manufactured or managed, they are subject to uncontrollable overheating known as thermal runaway, which can lead to disastrous fires, smoke and chemical emissions.
Newer energy-storage alternatives could help mitigate these threats, says Dhevathi Rajan Rajagopalan Kannan, a research scientist at ESRI who is in charge of that project. Among the alternatives: sodium-ion batteries, which, given the abundance of sodium, are cheaper and more sustainable to produce. “What I’m trying to understand is whether the sodium-ion battery that is being used, or that is available, is safe or not,” he says. And it’s a race against time: “That is a fundamental understanding we are trying to get to before it gets more commercialized and mass-produced and adopted within the U.S.”
To that end, Kannan is performing a series of experiments on a small set of commercial sodium-ion cells, including discharging and charging the batteries and subjecting them to off-nominal electrical and thermal conditions to see if combustion, explosion or “any kind of thermal runaway” results. So far, the results show that they are not very different from lithium-ion batteries, he says: “At ESRI, we plan to test sodium-ion cells from various manufacturers to gather more data on their performance and safety.”
Under Jeevarajan’s leadership, ESRI is marshaling its resources to address the public’s immediate needs and get a jump on the future. For example, Jeevarajan points to the institute’s work relevant to fast charging stations, which are already available to drivers of electric vehicles. “People don’t fully understand what happens inside a cell when you do the fast charge,” she says. “We open up the cells and study them analytically, spectroscopically and so on, to understand what changes are going on.”
Jeevarajan’s ultimate goal is to be proactive: ESRI is also studying the safety aspects of using hydrogen as fuel. “If we can get ahead of the game,” she says, “we can help with setting up standards and regulations.” And that will make everyone safer. —Rachel Hartigan
Safety research at the speed of artificial intelligence
Before Jill Crisman joined ULRI’s Digital Safety Research Institute (DSRI) in 2022 as its executive director and vice president, she spent three decades leading artificial-intelligence efforts for the U.S. government as well as private and academic institutions. “I can remember the day when the digital ecosystem was first set up,” Crisman says—and she has watched it evolve from “a trusted place” to one beset by scams, disinformation and cybercrime. Her goal for DSRI is to help restore the digital realm’s trustworthiness, in part by ensuring that new and emerging technologies “are deployed safely,” Crisman says. Among the fastest-growing and trickiest of these safety challenges: how to “align” the behavior of AI systems called large language models (LLMs).
Once trained on gigantic sets of data scraped from the Internet, LLMs are remarkably proficient at predicting the strings of words, or synthesizing the images and videos, that most likely respond to a given prompt. At present, these systems can’t be said to reason; LLMs’ outputs can change markedly in response to seemingly minor changes to the original prompt. Even so, these systems’ conversational and generative skills have attracted more than half a trillion dollars in investment and hundreds of millions of users in just the past few years. Many technology companies deem LLMs essential to the future of computing.

But as these systems become more powerful, the potential risks they pose may also increase. If improperly trained or deployed without proper guardrails, an LLM can generate all sorts of unwanted and unsafe outputs, such as hate speech or hallucinated claims that innocent people committed heinous crimes. What’s more, creative hackers are quickly poking holes in existing safety systems: In September 2024 a hacker prodded OpenAI’s LLM product ChatGPT into creating a fantasy story that contained within it detailed instructions for making fertilizer bombs.
The field is moving rapidly—and government efforts to provide frameworks for LLM safety are still in their infancy. In October 2023 President Joe Biden issued a landmark executive order focused on the safety of LLMs and other so-called “generative” AI systems. Some groups created by this executive order, such as an AI safety board within the U.S. Department of Homeland Security, came online in 2024.
So just as Underwriters Laboratories developed its own safety standards for airplanes during the Wild West aviation industry of the early 1920s, DSRI is moving swiftly to conduct its own AI safety research in collaboration with other enterprises. In July 2024 DSRI announced a partnership with the nonprofit Allen Institute for Artificial Intelligence (Ai2) to develop safety evaluation practices for LLMs, starting with Ai2’s very own Open Language Model (OLMo).

DSRI’s digital safety research aims to restore trust in the digital ecosystem.
boonchai wedmakawand/Moment/Getty Images
In August 2024 the two institutes partnered to stage a challenge at the major hackers’ convention DEF CON, in which teams attempted to poke holes in a model card that described OLMo’s capabilities, safety features and performance against benchmark tests in intentionally lofty language meant to goad the event’s hackers. The contest yielded 200 flaw reports, including a set of previously unaccounted-for prompts that could “jailbreak” OLMo—trick the model into ignoring its safety restrictions—and bypass its existing guardrails.
Eventually, this work could yield well-defined tests for what constitutes safe LLM behavior, as well as a framework in which hackers can flag any flaws they discover for an LLM’s creator. —Michael Greshko
Keeping air breathable, indoors and out
When it comes to the air we breathe, now is a critical time in environmental history. So says Marilyn Black, a public health scientist and former head of the Chemical Insights Research Institute (CIRI) at ULRI. Both outdoor and indoor air are at risk, she adds. “Wildfires are striking in urban interface areas with increased frequency in places like Hawaii and Canada; 3D printers are growing in popularity in school systems with unknown health consequences; and building materials are impacting the built environment and occupant health.”
CIRI is building on Black’s decades of work in environmental science. In 1989 Black founded Air Quality Sciences, a research company focused on measuring indoor pollution and its effects. Eleven years later, she created GREENGUARD Environmental Institute, a nonprofit that certifies chemically safe products. In 2011 Underwriters Laboratories, as ULRI was then called, bought the two organizations and hired Black. Soon, she added CIRI to her portfolio to expand on nonprofit research and outreach efforts on environmental exposures. Since then CIRI has grown to include 25 research and amplification specialists and a 3,000-square-foot lab space in Marietta, Georgia, equipped with state-of-the-art analytical technology.
Black stresses that CIRI doesn’t do research for its own sake, but as a springboard for action. That approach is fundamental to ULRI overall, says Christopher J. Cramer, ULRI’s interim president and chief research officer. “We want to provide tools to mitigate risk, and we want to modify behavior by providing convincing evidence of how risks can be avoided or mitigated.”
CIRI prioritizes research into environmental problems that are particularly widespread. For instance, wildfire smoke can generate air pollution more than 600 miles away from the initial blaze—think of Canadian wildfires darkening New York skies in recent summers. The institute’s research into how that smoke degrades indoor air quality led to practical guidelines for consumers on how to build a DIY box-fan air filter, especially important when there’s a run on home air purifiers.

3D printers, widely used in education, generate vapors and particulates.
LanaStock/iStock/Getty Images Plus
CIRI also examines the effects of new technologies that are rapidly spreading. 3D printers are a prime example. Eagerly embraced as an educational tool, the printers were incorporated into classrooms, libraries and community centers, often with little forethought about potential hazards. CIRI identified exposure risks associated with vapors and particulates generated during the printers’ operation—and proposed mitigation strategies, such as better ventilation. The institute also made sure schools were informed about its findings.
In the next year or two, researchers at CIRI plan to investigate how air quality is affected by two ramifications of climate change: extreme temperatures and the construction of resilient and more energy-efficient buildings. New chemical and biological assessment tools will be front-and-center, Black says, especially for “identifying human risks and measuring biomarkers to explain why exposure to certain chemicals leads to adverse human-health responses.”
This research could affect millions of people. That’s intentional, says Cramer: “We want to make the greatest impact we can.” —Rachel Hartigan