Skip to main content
Scientific American

April 2, 2025

7 min read

Google Logo Add Us On GoogleAdd SciAm

The Safety Scientists Forging a More Secure Tomorrow

A peek behind the scenes at UL Research Institutes, where researchers are taking on the biggest threats to our environment, public health and digital safety

Rachel Hartigan, Michael Greshko

Stock image of a digital scape

The digital ecosystem is complex, interconnected—and not always trustworthy.

Carloscastilla/Alamy Stock Photo

Scientific American Custom Media LogoUL Research Institutes logo

This series was created for UL Research Institutes by Scientific American Custom Media, a division separate from the magazine’s board of editors.

Join Our Community of Science Lovers!

“When more people make a breakthrough discovery or build a coalition for progress, it helps advance a vision of the world in which we all want to live.” That statement of philosophy reflects how UL Research Institutes (ULRI) employees have long approached their work.  

Known for boundary-pushing research, ULRI, formerly known as Underwriters Laboratories, seeks to identify and mitigate threats to the environment, public health and digital safety that are not well addressed elsewhere, and includes institutes focused on electrochemical, digital, chemical and fire hazards. Researchers there pursue innovative projects, often in partnership with distinguished academic and scientific organizations around the world. Here, we take a look at the minds behind the science at three ULRI institutes. 

A long hunt for safer batteries 

In 1999 American astronauts wanted to bring a digital camcorder on a space shuttle mission. But the camcorder was powered by a lithium-ion battery, a relatively new technology that hadn’t yet been approved for human space flight. To ensure the device wouldn’t introduce unknown hazards to the mission, Judy Jeevarajan, then a research scientist at NASA’s Johnson Space Center, ran rigorous tests on the battery to make sure it was safe. In the process, she became the first person to certify a lithium-ion battery for human space flight. 

A quarter-century later, lithium-ion batteries are everywhere, from our ubiquitous phones to implanted medical devices to satellites blinking at us in the night sky. And Jeevarajan, now vice president and executive director of the Electrochemical Safety Research Institute (ESRI) at ULRI, continues to lead the charge toward making them safe, wherever they’re used. 

As much as she relished her time as a senior scientist at NASA, Jeevarajan joined ULRI in 2015, eager to embrace the organization’s broader safety goals. In 2021 she was tapped to lead ESRI, newly created with the mission to “advance safer energy storage through science.” She quickly built the institute to a staff of 21 chemical engineers, electrical engineers, fire engineering scientists, materials scientists, computer-modeling experts and other specialists. 

Located in a University of Houston technology park, the team collaborates with researchers in academia and industry to understand the workings of different energy-storage systems—particularly advanced batteries and hydrogen—including what may cause them to break down and when they may become dangerous. The question that drives ESRI’s work, says Jeevarajan, is “what can we do to make the world a safer place, especially with respect to energy . . . and sustainability?”  

It’s a question of particular pertinence now, as battery-powered devices are crucial in the move toward renewable energy. Lithium-ion batteries—light, powerful, rechargeable—are the most widely used. But if improperly manufactured or managed, they are subject to uncontrollable overheating known as thermal runaway, which can lead to disastrous fires, smoke and chemical emissions. 

Newer energy-storage alternatives could help mitigate these threats, says Dhevathi Rajan Rajagopalan Kannan, a research scientist at ESRI who is in charge of that project. Among the alternatives: sodium-ion batteries which, given the abundance of sodium, are cheaper and more sustainable to produce. “What I’m trying to understand is whether the sodium-ion battery that is being used, or that is available, is safe or not,” he says. And it’s a race against time: “That is a fundamental understanding we are trying to get to before it gets more commercialized and mass-produced and adopted within the U.S.” 

Read More From This Report

New Horizons in Safety Science

    Safety Science Expands to Meet Global Threats

    A Quest to Stop Fires Before They Turn Lethal

    The New Materials that Could Ease Climate Impacts

    The Safety Scientists Forging a More Secure Tomorrow

View the Report

To that end, Kannan is performing a series of experiments on a small set of sodium-ion commercial cells, including discharging and charging the batteries and subjecting them to off-nominal electrical and thermal conditions to see if combustion, explosion or “any kind of thermal runaway” results. So far, the results show that they are not very different from lithium-ion batteries, he says: “At ESRI, we plan to test sodium-ion cells from various manufacturers to assimilate more data on their performance and safety.” 

Under Jeevarajan’s leadership, ESRI is marshaling its resources to address the public’s immediate needs and get a jump on the future. For example, Jeevarajan points to the institute’s work relevant to fast charging stations, which are already available to drivers of electric vehicles. “People don’t fully understand what happens inside a cell when you do the fast charge,” she says. “We open up the cells and study them analytically, spectroscopically and so on, to understand what changes are going on.”  

Jeevarajan’s ultimate goal is to be proactive: ESRI is also studying the safety aspects of using hydrogen as fuel. “If we can get ahead of the game,” she says, “we can help with setting up standards and regulations.” And that will make everyone safer. —Rachel Hartigan 

Safety research at the speed of artificial intelligence

Before Jill Crisman joined ULRI’s Digital Safety Research Institute (DSRI) in 2022 as its executive director and vice president, she spent three decades leading artificial-intelligence efforts for the U.S. government as well as private and academic institutions. “I can remember the day when the digital ecosystem was first set up,” Crisman says—and she has watched it evolve from “a trusted place” to one beset by scams, disinformation and cybercrime. Her goal for DSRI is to help restore the digital realm’s trustworthiness, in part by ensuring that new and emerging technologies “are deployed safely,” Crisman says. Among the fastest-growing and trickiest of these safety challenges: how to “align” the behavior of AI systems called large language models (LLMs).  

Once trained on gigantic sets of data scraped from the Internet, LLMs are remarkably proficient at predicting the strings of words or synthesizing the images and videos that most likely respond to a given prompt. At present, these systems can’t be said to reason; LLMs’ outputs can change markedly in response to seemingly minor changes to the original prompt. Even so, these systems’ conversational and generative skills have attracted more than half a trillion dollars in investment and hundreds of millions of users in just the past few years. Many technology companies deem LLMs as essential to the future of computing.  

UL Research Institutes Advancing Safety Innovations ad

But as these systems become more powerful, the potential risks they pose may also increase. If improperly trained or without the proper guardrails in place, an LLM can generate all sorts of unwanted and unsafe outputs, such as hate speech or hallucinated claims that innocent people committed heinous crimes. What’s more, creative hackers are quickly poking holes in existing safety systems: In September 2024 a hacker prodded OpenAI’s LLM product ChatGPT into creating a fantasy story that contained within it detailed instructions for making fertilizer bombs.  

The field is moving rapidly—and government efforts to provide frameworks for LLM safety are still in their infancy. In October 2023 President Joe Biden issued a landmark executive order focused on the safety of LLMs and other so-called “generative” AI systems. Some groups created by this executive order, such as an AI safety board within the U.S. Department of Homeland Security, came online in 2024. 

So just as Underwriters Laboratories developed its own safety standards for airplanes during the Wild West aviation industry of the early 1920s, DSRI is moving swiftly to conduct its own AI safety research in collaboration with other enterprises. In July 2024 DSRI announced a partnership with the nonprofit Allen Institute for Artificial Intelligence (Ai2) to develop safety evaluation practices for LLMs, starting with Ai2’s very own Open Language Model (OLMo). 

Multiple monitors, a laptop and a phone

DSRI’s digital safety research aims to restore trust in the digital ecosystem.

boonchai wedmakawand/Moment/Getty Images

In August 2024 the two institutes partnered to stage a challenge at the major hackers’ convention DEF CON, in which teams attempted to poke holes in a model card that described OLMo’s capabilities, safety features and performance against benchmark tests in intentionally lofty language meant to goad the event’s hackers. The contest yielded 200 flaw reports, including a set of previously unaccounted-for prompts that could “jailbreak” OLMo—alter it to allow unauthorized modifications or software installation—and bypass its existing guardrails. 

Eventually, this work could yield well-defined tests for what constitutes safe LLM behavior, as well as a framework in which hackers can flag any flaws they discover for an LLM’s creator. —Michael Greshko 

Keeping air breathable, indoors and out 

When it comes to the air we breathe, now is a critical time in environmental history. So says Marilyn Black, a public health scientist and former head of the Chemical Insights Research Institute (CIRI) at ULRI. Both outdoor and indoor air are at risk, she adds. “Wildfires are striking in urban interface areas with increased frequency in places like Hawaii and Canada; 3D printers are growing in popularity in school systems with unknown health consequences; and building materials are impacting the built environment and occupant health.”

CIRI is building on Black’s decades of work in environmental science. In 1989 Black founded Air Quality Sciences, a research company focused on measuring indoor pollution and its effects. Eleven years later, she created GREENGUARD Environmental Institute, a nonprofit that certifies chemically safe products. In 2011 Underwriters Laboratories, as ULRI was then called, bought the two organizations and hired Black. Soon, she added CIRI to her portfolio to expand on nonprofit research and outreach efforts on environmental exposures. Since then CIRI has grown to include 25 research and amplification specialists and a 3,000-square-foot lab space in Marietta, Georgia, equipped with state-of-the-art analytical technology.

Black stresses that CIRI doesn’t do research for its own sake, but as a springboard for action. That approach is fundamental to ULRI overall, says Christopher J. Cramer, ULRI’s interim president and chief research officer. “We want to provide tools to mitigate risk, and we want to modify behavior by providing convincing evidence of how risks can be avoided or mitigated.” 

CIRI prioritizes research into environmental problems that are particularly widespread. For instance, wildfire smoke can generate air pollution more than 600 miles away from the initial blaze—think of Canadian wildfires darkening New York skies in recent summers. The institute’s scientific research into the impact that has on indoor air quality led to practical guidelines for consumers on how to build a DIY box-fan air filter, especially important when there’s a run on home-air purifiers. 

A man surrounded by four children, looking at a scientific experiment

3D printers, widely used in education, generate vapors and particulates.

LanaStock/iStock/Getty Images Plus

CIRI also examines the effects of new technologies that are rapidly spreading. 3D printers are a prime example. Eagerly embraced as an educational tool, the printers were incorporated into classrooms, libraries and community centers, often with little forethought about potential hazards. CIRI identified exposure risks associated with vapors and particulates generated during the printers’ operation—and proposed mitigation strategies, such as better ventilation. The institute also made sure schools were informed about their findings. 

In the next year or two, researchers at CIRI plan to investigate how air quality is affected by two ramifications of climate change: extreme temperatures and the construction of resilient and more energy-efficient buildings. New chemical and biological assessment tools will be front-and-center, Black says, especially for “identifying human risks and measuring biomarkers to explain why exposure to certain chemicals leads to adverse human-health responses.” 

This research could affect millions of people. That’s intentional, says Cramer: “We want to make the greatest impact we can.” —Rachel Hartigan 


Explore ULRI’s safety-science research initiatives.

Rachel Hartigan, who writes about history, culture and science, will publish a book about the search for Amelia Earhart in 2026.
Michael Greshko is a freelance science journalist based in Washington, D.C., whose work has appeared in many publications including the New York Times, the Washington Post, Science, Nature and National Geographic, where he worked as a staff writer.

It’s Time to Stand Up for Science

If you enjoyed this article, I’d like to ask for your support. Scientific American has served as an advocate for science and industry for 180 years, and right now may be the most critical moment in that two-century history.

I’ve been a Scientific American subscriber since I was 12 years old, and it helped shape the way I look at the world. SciAm always educates and delights me, and inspires a sense of awe for our vast, beautiful universe. I hope it does that for you, too.

If you subscribe to Scientific American, you help ensure that our coverage is centered on meaningful research and discovery; that we have the resources to report on the decisions that threaten labs across the U.S.; and that we support both budding and working scientists at a time when the value of science itself too often goes unrecognized.

In return, you get essential news, captivating podcasts, brilliant infographics, can't-miss newsletters, must-watch videos, challenging games, and the science world's best writing and reporting. You can even gift someone a subscription.

There has never been a more important time for us to stand up and show why science matters. I hope you’ll support us in that mission.

Thank you,

David M. Ewalt, Editor in Chief, Scientific American

Subscribe

Subscribe to Scientific American to learn and share the most exciting discoveries, innovations and ideas shaping our world today.

Subscription PlansGive a Gift Subscription
  • Explore SciAm
  • Latest Issue
  • News
  • Opinion
  • Newsletters
  • Podcasts
  • Games
  • Travel
  • Company
  • About
  • Press Room
  • FAQs
  • Contact Us
  • Standards & Ethics
  • International Editions
  • Advertise
  • More
  • Accessibility
  • Terms of Use
  • Privacy Policy
  • US State Privacy Rights
  • Use of cookies/Do not sell my data
  • Return & Refund Policy

Scientific American is part of Springer Nature, which owns or has commercial relations with thousands of scientific publications (many of them can be found at www.springernature.com/us). Scientific American maintains a strict policy of editorial independence in reporting developments in science to our readers.

© 2026 SCIENTIFIC AMERICAN, A DIVISION OF SPRINGER NATURE AMERICA, INC.
ALL RIGHTS RESERVED.