The opinions espoused in the blog posts below are mine alone. They are not representative of the viewpoints of my current or former employer(s), nor do they represent the opinions or findings of any current or former research funders.
As we begin 2026, it is not hyperbole to say that AI is everywhere. Currently, someone can wake up to an alarm on their AI-embedded smart device, drive to work whilst conversing with a built-in AI agent, conduct their work with the help of one or more AI models, and cap off the day with an AI-generated podcast of their choosing. Computers are sold with "AI capability" (a fuzzy term encompassing anything from built-in AI assistants like Microsoft's Copilot to high-powered CPU and GPU rigs for those interested in building AI models), AI assistants are standard offerings in cell phones, and even non-communication devices like refrigerators are being infused with "intelligence." As a result, many of us likely interact with AI on a daily basis, whether or not we actually seek it out.
The rapid proliferation of AI through every conceivable corner of our lives is driving a nearly-unprecedented market - creating massive wealth at the cost of our environment and potentially even ourselves. Siphoning up massive amounts of energy and water, AI data centers are exacerbating the very climate challenges AI has been heralded to help solve. The energy and water consumption of data centers is not a new issue, of course, but AI-driven load growth is pushing the industry far beyond previous norms. Data centers are popping up in new, and not always welcoming, locations, further stressing resource-scarce areas like the American West, where Wyoming will soon host a data center that will use more power than all of the state's human residents combined. As data centers propagate across the world, even those who don't use AI may soon find themselves unable to escape its omnipresence.
For better or worse, whether you like it (or use it) or not, AI is here and impacting all of our lives. There are, at least, real benefits that could be afforded by AI - for example, AI has been used to find a cure for a rare disease and to predict protein structures. However, the headlong rush to build AI into anything and everything should concern everyone, even AI optimists. The current regulatory environment is lax globally and, in some cases (like the United States), actively hostile to imposing any restrictions on development or requirements for oversight. The lack of circumspection and discussion in the engineering community about the risks associated with AI - particularly the societal risks beyond energy and water usage - is disheartening. Certainly these concerns are voiced in some communities (particularly philosophy and AI ethics circles), but the extent to which engineers and everyday users, who make up at least some of the driving force behind deployments, see or engage in serious discussion about the significant dangers of AI remains limited.
One of the biggest concerns with AI, particularly generative AI, is the ease with which it can create or exacerbate online polarization and echo chambers. It is already difficult for individuals of different political persuasions and cultural identities to occupy the same digital spaces: recommender systems and content-feed algorithms all but ensure that users are fed a continuously-narrowing stream of content tailored to their interests. Thus, those with different interests may experience completely separate virtual environments. The danger is that, without the shared descriptive understanding of reality derived from common experience, it becomes more and more difficult to debate normative policies. That is, the democratic process is degraded as participants (citizens and lawmakers) no longer agree on even the foundation of, or need for, policies, let alone the policies themselves.
Essentially, generative AI makes it easy to produce ultra-targeted propaganda and, at the same time, flood the internet with misleading information. AI-facilitated misinformation floods have already wrought havoc in the aftermath of Hurricane Helene and the run-up to the 2024 U.S. election. Targeting propaganda to individual users, made possible by the combination of massive social media datasets and pathetic "safeguards" on AI models, could manipulate opinions and actions (e.g., votes) at the individual level - potentially without notice. Targeted misinformation campaigns could be much more damaging than wholesale floods precisely because they can go undetected, shaping opinions without the victims knowing. This reduction of a human to a collection of manipulable data is one of Arendt's hallmarks of totalitarianism, under which humans can be totally controlled without actual violence or active coercion. Simply put, AI fails to politically liberate.
There is significant interest in popular media around the impact of AI on the labor market, albeit with mixed messaging. On one hand, AI optimists tout the benefits of AI augmentation for increasing productivity and allowing humans to focus on creative tasks. On the other, AI skeptics raise concerns about under- and unemployment. Even if AI is put to use to increase productivity, what is the effect on human labor? Will AI augmentation really "liberate," in the sense of lowering workloads and/or increasing the value of human labor? Or is it more likely that the use of AI will lead to people simply doing more work without additional compensation, thus devaluing labor? This question mirrors the uncertainties around AI's energy use: as more efficient AI chips roll out, will the net effect truly be that AI companies use less energy, or will they simply build larger models? While the net impacts of AI on labor remain to be seen, there is a good argument that AI will (barring a systemic reorganization of labor) fail to liberate labor.
I am not advocating for a ban on AI development. Considering the capital to be gained (by developers) from AI, and its potential as a weapon, the incentive for clandestine development would undermine any attempt to ban it. A ban would also discount the very real benefits that could be realized with AI. What is needed are global standards for development practices and safeguards that respond to both the technical and societal risks associated with AI and provide a pathway to responsible development and use. This requires a large-scale, dedicated research program to understand the impacts of AI at the individual and societal levels, what effective use of AI looks like, and where and how safeguards can be constructed to mitigate risk. In the meantime, it is our responsibility to use AI appropriately and remain cognizant of its consequences.
Like many other people, I have found myself engrossed in the new (at least, when this is being written) HBO series The Last of Us. It's an excellent adaptation (I'm told) of a fantastic video game. If you're reading this and in a position to watch the show, I highly recommend it (with the caveat that I have only, as yet, seen through Episode 2) as a well-done take on life in a post-infection-borne apocalypse. As much as I love the story, what really struck me was a specific scene during Episode 2. In this scene, Joel and Tess are leading Ellie, their young and unexpected ward, through a derelict museum in the center of Boston. They anticipate, of course, "infected" inside, but are soon confronted with a fearful entity that remains nameless and formless until it is revealed to the viewer.
The reason this scene interested me is the manner in which Joel is able to communicate the severity (i.e., the risk) of their predicament to Ellie, who at that point was relatively naive to the dangers outside their Quarantine Zone. Typical risk communication, particularly to those with less subject-relevant knowledge, clearly explains what the cause of the risk is, what the potential consequences are, and what the course of action should be. In this scene, however, Ellie is able to understand the risk well enough to finally follow Joel's lead, despite Joel saying only that silence, not just quiet, is required from then on.
Why did this work? Beyond some standard excuses relating to on-screen convenience and drama, is there a real-world question that can be pursued here? Is there a difference between effective risk communication strategies in strategic and tactical contexts?
In this case, "strategic" might refer to those situations wherein there is:
Time available to communicate without compromising other mission objectives
A decision or action (generally: an executable) that depends on the information being communicated
On the other hand, "tactical" situations may have contrasting features:
Limited time available, or communication may compromise other mission objectives
The risk being communicated may be secondary to an apparent (and thus unnecessary to communicate) risk
I think this is something to consider - the context in which communication occurs can change what effective risk communication looks like. Communicating risk in a strategic scenario, when there is time for formal communication and dialogue, will look more like traditionally-understood risk communication practices. Communicating in tactical scenarios, however, where expediency is demanded and additional risks must be actively managed, may look starkly different while still being effective.
I was hoping to have more thoughts to put down on the page today, but writing my dissertation has to take precedence over this. Maybe I'll edit this in the future, or maybe I'll run into a paper that already puts forward all these ideas and can rest easy knowing it's covered.
As I begin the (hopefully) final year of my Ph.D., I am thinking more and more about whether and how I can be an effective instructor for future engineering students. I have been a Teaching Assistant for the past 3 years, helping students at both the undergraduate and graduate levels learn the fundamentals of reliability engineering. Prior to graduate school, I volunteered with the Training Department to instruct new engineering hires in the fundamentals of nuclear and electrical engineering; I've also given lectures to high school and middle school audiences. I feel that I have been able to develop and hone an effective pedagogical style that works well for me - what I call a "humanized conversational" style, where the students recognize that I'm not a genius deigning to impart my knowledge on mere mortals, but a lifelong learner who is guiding them and (sometimes) learning alongside them. However, I have to wonder if this style is truly effective and as impactful as I hope it is.
I am very lucky to have a plethora of (mostly great) examples to learn from in my undergraduate and graduate professors, and even luckier to participate in the University of Maryland Future Faculty Program. That program seeks to teach the aspects of being a faculty member that are not covered in a typical STEM graduate education: how to design, teach and evaluate effective courses; how to design and pitch research projects; and how to navigate the application and tenure processes, among other topics. I've already applied to two faculty positions for the 2023 cycle, and anticipate applying to many more as they open up (particularly in the southeast U.S. near my family). As part of this process, I have had to reflect honestly and openly on my strengths, goals and abilities as an instructor. What courses do I want to take over? What new courses can I design to enrich the learning experiences of the students?
I guess most professors struggle with these questions at some point(s) in their careers, but I find it's been a revealing experience to think about my goals and abilities before I really begin my instructional career. I'm going to be a TA again this semester (Fall 2022), and I'm excited to incorporate more of the aspects I've discovered through self-reflection and the Future Faculty Program. I hope that incorporating elements of Universal Design for Learning (UDL) and continuing to tweak my pedagogical style will show up in at least qualitative improvements in the students' reception and retention of the material, and ideally in a quantitative impact on homework and examination grades.
I had the occasion recently to delve into the true-crime case that is re-enthralling the United States, if not the world. HBO's The Staircase is the story of the tragic death of Kathleen Peterson and the trial of her husband Michael Peterson, accused of killing her. One of the principal aspects of the prosecution's case was evidence processed through Bloodstain Pattern Analysis (BPA). While watching the series, my sister commented that it was amazing that you could figure out so much through BPA, and we both marveled at this seemingly-valid science being used. However, as I later listened to the companion podcast to the show, an investigative reporter and expert in the trial use of BPA evidence made the surprising (to me) statement that BPA is not, in fact, a science. She made a distinction between a science and a technique that I found incredibly revealing and applicable to my own research. In her conception, a science is conducted in a systematic fashion, with repeatable results that will not vary wildly depending on who is performing the experiment; a technique, on the other hand, is a tool that may (or may not) make use of science but tends to break down when it comes to repeatability. BPA is a technique - developed and proffered almost single-handedly by its creator - that has ballooned into something that appears to be valid science. The reality, however, is that BPA "experts" receive relatively little training, roughly 25% of tested analysts failed to arrive at the correct conclusion in a blind study, and the results of any analysis are heavily influenced by the interpretation of the specific analyst performing it.
You might say that's great - or terrible, rather - but how does this relate to anything I'm interested in? Well, I would posit that human reliability analysis (HRA) is another technique (or rather, a collection of techniques) being passed off in some circles as valid science. When I last counted, there were over 20 HRA methods developed for nuclear power applications alone, with the analysis of any given scenario depending heavily on the method employed and the user. In a study reported in 2019, analysts given the exact same scenario reported results that varied by several orders of magnitude - a lot when you consider that typical values are on the order of 10^-2 or less. The variation in results is due in part to flexibility designed into HRA methods to ensure they are appropriate for applications beyond nuclear power operations. More concerning, however, are inconsistencies in the underlying aspects of HRA - variable definitions and processes that differ significantly between methods. Core concepts - including the fundamental analytical element (the human failure event, or HFE), how causal relationships between variables (dependencies) are handled, and how scenarios are broken into manageable elements (task analysis) - have no standardized definition or implementation. As a result, HRA is less a science than a loose collection of methods that are not particularly aligned. HRA is, therefore, a technique and not a science.
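To make that spread concrete, here is a minimal, purely illustrative sketch in Python. The human error probability (HEP) values are made up for illustration (they are not from the 2019 study or any particular method); the point is simply what "several orders of magnitude" looks like next to a nominal value of 10^-2.

```python
import math

# Hypothetical HEP estimates for the *same* scenario from different
# methods/analysts - illustrative values only, not data from any study.
hep_estimates = [5e-5, 3e-4, 2e-3, 1e-2, 8e-2]

nominal = 1e-2  # the "typical" magnitude cited above

# Spread between the smallest and largest estimate, in orders of magnitude
spread = math.log10(max(hep_estimates) / min(hep_estimates))
print(f"Spread across estimates: ~{spread:.1f} orders of magnitude")

# The extremes paint very different risk pictures relative to the nominal value
for hep in (min(hep_estimates), max(hep_estimates)):
    print(f"HEP = {hep:.0e} -> {hep / nominal:.1f}x the nominal 1e-2")
```

If two analysts land at opposite ends of a spread like this, one concludes that the human contribution to risk is negligible while the other concludes it dominates - exactly the kind of inconsistency that separates a technique from a science.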
The question now is how to rescue a science from the technique. How do we do this after 50+ years of development of a technique that has become central to the safety analysis of complex engineering systems? The answer is that we need a robust, thorough and comprehensive review and standardization of the core aspects of HRA. Rescuing the science from the technique involves determining what is consistent beyond the analyst's perception or opinion, identifying and incorporating fundamental knowledge of human behavior, and developing testable, repeatable and traceable models of human-system interaction and human decision-making. Traceability is critical to ensure that HRA methods can be properly scrutinized by experts and to facilitate a continuous process of improvement.
Rescuing the science is not an easy challenge, but it is a necessary one if we want engineering systems to actually be safe. Multiple studies have reported that humans may account for as much as 60-80% of industrial incidents, so a science that can explain and model human performance in these systems is critical to ensuring their safety. Part of the engineering code of ethics is holding paramount the safety and wellbeing of the public, and we cannot adequately meet that obligation without a robust science underlying our analysis of human reliability.
In the year 2022, we are repeatedly being faced with the consequences of decisions from the past. Globally, we have neglected to heed the warnings of countless climate scientists that we are approaching a critical turning point in the habitability of the planet, and we continue to see heinous disparities between the "haves" and the "have-nots". A land war in Europe has laid bare the consequences of dependency on imports for basic necessities, and a continuing pandemic has similarly illuminated faults in the on-demand supply chain model. The early joy of a new year dawning on waning COVID-19 case numbers has given way to the hum-drum of continued apathy and mild nihilism in many, myself included.
The entire nuclear community - researchers, technicians, operators and policy-makers - watched somewhat helplessly as Russian troops invaded a nuclear power plant in Ukraine, then worked for hours to stem the onslaught of misinformation and fear-mongering that inevitably followed the news. That was nothing, though, compared to the weeks of forced labor endured by the Ukrainian operators who maintained safety under brutal conditions. Despite the non-events at multiple Ukrainian NPPs (as far as nuclear safety is concerned), the war has fueled debate on both sides about the benefits and drawbacks of nuclear power. Adding to the mix is a recently-released Netflix miniseries that is digging up nuclear paranoia from Three Mile Island.
In the turmoil, it can be difficult to envision a future role for nuclear power, but it is essential that nuclear experts and the general public understand that there is a real need for nuclear power in the future global energy economy. Barring some systemic change in how many of us live our lives, base-load power will continue to be necessary, regardless of whether the current "macro-grid" system remains or is replaced by "micro-grids" at the community level. In either system, nuclear power offers advantages not seen in current "alternative" energy sources (e.g., wind and solar) and avoids the disadvantages of current base-load energy sources (e.g., natural gas and coal). Nuclear power can offer a safe, clean and reliable source of energy that, when supplemented at peak loads by a host of renewable energy sources, can satisfy current and future energy demands for everything from small communities to large-scale regions.
Maybe the most exciting aspect of the future role of nuclear power is in "democratizing" the energy space - bringing power to those who need it and have been overlooked by "the system" - while avoiding further environmental destruction. What we may think of as the "developed" nations were privileged with access to cheap, plentiful and seemingly (at the time) guilt-free sources of massive amounts of energy in coal and oil reserves. Of course, development came at the price of environmental ruin, often shouldered disproportionately by those who reaped none of the benefits. As various areas around the world attempt to perform the same feat, they face economic and environmental barriers that hamper their development. What nuclear power offers is the opportunity to drive development and innovation with fewer economic and environmental barriers - particularly when it occurs with the financial and engineering support of nations that can afford to provide it. This is all in reference to the energy benefits of nuclear power alone, leaving aside the multitude of other applications that could benefit users, including process heating, desalination and industry.
The role of nuclear power in the future should be to bring energy and other benefits to communities that have been historically underserved, while at the same time shouldering the environmental and economic burdens of the served. Nuclear power can provide the power to drive innovation and development, desalinate water for drinking and irrigation, and provide process heat to spur new industry. It does all of this without emitting dangerous levels of greenhouse gases or filling the region with poisonous slag dumps and industrial waste. Implementing nuclear power on a wider scale will, of course, require a solution to the problem of waste handling (reprocessing!), but fortunately this is a purely political hurdle. The world needs nuclear power.
Well, if you've made it this far, there are only two possible explanations: either I have enthralled you with all my other writing and you couldn't resist continuing, or (far more likely) I have forgotten to write many (or any) other blog posts, so this one is easy to find. Either way, I greatly appreciate you taking the time to read anything I've written on the site. If you have any questions or comments, please do not hesitate to contact me.