Newsletter No. 1, Fall 2025
Dear colleagues,
Welcome to the first edition of the 2025-2026 WVU AI Update, a monthly newsletter curated for faculty interested in how artificial intelligence is transforming education, research, and professional practice—both globally and here at West Virginia University. (Right now it is only me, but I would appreciate it if you could send me any news you want to share with the rest of the faculty.)
This newsletter is organized into three brief sections:
- AI in the World: A selective (and admittedly biased!) roundup of news and developments in AI that we believe are worth your time.
- AI @ WVU: Local updates, faculty highlights, and ongoing projects involving AI on campus.
- Community & Conversations: Updates on our monthly faculty AI gatherings, including scheduling, speakers, and participation details.
AI in the World:
Designing Ocean Gliders with AI: Mimicking Nature for Climate Insight
In an impressive fusion of machine learning and marine engineering, researchers from MIT’s CSAIL and the University of Wisconsin-Madison have unveiled an AI-driven pipeline that generates unconventional, highly efficient designs for autonomous underwater gliders. Inspired by the biomechanics of marine life, their system uses neural networks and physics simulations to evolve 3D-printable forms with superior lift-to-drag ratios—key for energy-efficient navigation. These AI-designed gliders, resembling sleek jets or flat-bodied sea creatures, outperformed traditional torpedo-style models in experimental trials. Beyond elegant engineering, the work aims to enhance oceanographic monitoring by enabling more agile, low-energy vehicles capable of gathering climate-critical data like water salinity and temperature. Read the full story from MIT News: AI shapes autonomous underwater gliders. For those interested in technical details, the preprint is available on arXiv. This project also reflects a broader trend in AI-driven bioinspired design, as seen in recent DARPA-backed initiatives for soft robotics and shape-morphing vehicles. The implications for ocean research, environmental monitoring, and scalable robotics are profound—and well worth following.
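For readers curious what an AI-in-the-loop design search looks like in miniature, here is a hedged sketch: a random search over a few shape parameters, each scored by a placeholder lift-to-drag function. The scoring function and parameter names are invented purely for illustration; the actual MIT/UW pipeline uses learned neural surrogates and full fluid simulations.

```python
# Minimal, hypothetical sketch of the design-loop idea: propose candidate
# shape parameters, score each with a (placeholder) physics model, and keep
# the design with the best lift-to-drag ratio. Everything below is invented
# for illustration; it is not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(0)

def lift_to_drag(params: np.ndarray) -> float:
    """Placeholder stand-in for a physics simulation.

    params = (span, flatness, taper); the quadratic form below is
    made up so the loop has something plausible to optimize.
    """
    span, flatness, taper = params
    lift = span * (1.0 + 0.5 * flatness)
    drag = 0.2 + taper**2 + 0.1 * span**2
    return lift / drag

best_params, best_score = None, -np.inf
for _ in range(5000):                      # random search over the design space
    candidate = rng.uniform(0.1, 2.0, size=3)
    score = lift_to_drag(candidate)
    if score > best_score:
        best_params, best_score = candidate, score

print(f"best L/D = {best_score:.2f} at params {np.round(best_params, 2)}")
```

In the real system, the placeholder function would be replaced by a neural surrogate trained on simulation data, which is what makes searching unconventional shapes cheap enough to be practical.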
AI-Powered Microscopy Sheds Light on Blood Clots Before They Strike
Imagine preventing heart attacks before they happen—by simply watching how your blood behaves in real time. A groundbreaking study from the University of Tokyo combines ultra-fast microscopy with artificial intelligence to monitor how platelets form clots, offering a personalized and noninvasive window into cardiovascular risk. By analyzing thousands of flowing blood cell images per second using an AI-augmented microscope, researchers discovered that even a simple blood draw from the arm could mirror the clotting activity in coronary arteries. This could revolutionize how cardiologists assess and fine-tune antiplatelet treatments, reducing guesswork and side effects. With over 200 patient samples validating its accuracy, this approach could transform routine checkups into real-time diagnostics. Read more on ScienceDaily or dive into the original research in Nature Communications here. As AI continues to push the boundaries of precision medicine, tools like this one mark a new era of personalized cardiovascular care.
From Chatbot to Coworker: ChatGPT Agent Takes On Real-World Tasks
OpenAI has just launched the next leap in AI usability: the ChatGPT agent, an integrated system that doesn’t just answer questions—it acts. By combining advanced reasoning, web browsing, code execution, and app integration, the new agent mode transforms ChatGPT into a powerful digital assistant that can plan trips, analyze financial data, summarize your inbox, or even create editable presentations from scratch. Running on its own virtual computer, the agent fluidly switches between tools—browser, terminal, APIs—to accomplish multi-step tasks with minimal user input while ensuring you remain in control at all times. Designed for professionals and power users, it sets new benchmarks in real-world task completion, spreadsheet manipulation, and even financial modeling, often rivaling or outperforming humans in controlled trials. Safety remains a priority, with enhanced protections against prompt injection and real-time oversight tools. Available now for Pro, Plus, and Team users, this launch marks a turning point in agentic AI. Learn more from OpenAI’s announcement: Introducing ChatGPT Agent.
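For the technically curious, the core pattern here is an "act, observe, repeat" loop: the model proposes an action, the runtime executes the matching tool, and the result is fed back until the task is done. The sketch below is a toy illustration of that general pattern, not OpenAI's implementation or API; the tool names and the scripted "plan" standing in for the model are invented.

```python
# Toy sketch of the general agent loop: pick a tool, execute it, feed the
# observation back, repeat until done. The scripted plan replaces real LLM
# calls; this is illustrative only, not OpenAI's system.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "browser": lambda q: f"[search results for '{q}']",
    "terminal": lambda cmd: f"[output of `{cmd}`]",
}

# Stand-in for the model: a fixed plan instead of generated actions.
scripted_plan = [
    ("browser", "flights MGW to BOS in October"),
    ("terminal", "python summarize_fares.py"),
    ("done", "Cheapest nonstop found; summary written to fares.md"),
]

def run_agent(plan):
    for tool, arg in plan:                 # the loop: act, observe, repeat
        if tool == "done":
            return arg                     # final answer back to the user
        observation = TOOLS[tool](arg)     # execute the chosen tool
        print(f"{tool}({arg!r}) -> {observation}")

print(run_agent(scripted_plan))
```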
Large Language Models Demystified: A Strategic Primer for Enterprises
In an era where language is data and data is power, understanding how large language models (LLMs) work is crucial for forward-thinking enterprises. NVIDIA’s A Beginner’s Guide to Large Language Models provides an accessible yet detailed introduction to LLMs, charting their evolution from rule-based NLP systems to today’s transformer-based behemoths like GPT-3 and PaLM. The guide walks readers through key concepts—neural networks, self-attention, unsupervised learning—and highlights both the promise and pitfalls of LLM adoption. Enterprises stand to gain significant advantages through LLMs: faster content generation, smarter automation, more precise analytics, and even the ability to build custom AI solutions via fine-tuning or parameter-efficient techniques like adapter tuning. However, ethical, interpretability, and infrastructure concerns must be addressed to ensure safe and responsible deployment. For those just entering the AI space or considering developing in-house models, this primer offers a solid foundation. You can access the guide here (PDF) or explore more enterprise-ready AI insights on NVIDIA AI.
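To make "self-attention" concrete, here is a minimal numpy sketch of a single attention step: each token builds query, key, and value vectors, scores every other token, and returns an attention-weighted mix of values. The weights below are random rather than trained; real LLMs learn these matrices and stack many such layers with multiple heads.

```python
# Minimal sketch of one self-attention step with random (untrained) weights.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # 4 tokens, 8-dim embeddings

x = rng.normal(size=(seq_len, d_model))    # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

Q, K, V = x @ Wq, x @ Wk, x @ Wv
scores = Q @ K.T / np.sqrt(d_model)        # scaled dot-product similarity
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens

output = weights @ V                       # each row: a context-aware token
print(weights.round(2))                    # each row sums to 1.0
```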
Can You Trust AI’s “Thought Process”? Anthropic’s Claude Opens the Black Box
What happens inside a large language model when it “reasons”? Anthropic, the team behind the Claude AI assistant, is pioneering new tools to answer exactly that. In a recent study, researchers applied a sparse-coding technique called “dictionary learning” to map Claude’s neural activations into over 1,000 human-interpretable concepts—ranging from emotional tones like “sarcasm” to structured ideas like “geography” or “gender.” This breakthrough gives us unprecedented visibility into how LLMs encode and manipulate knowledge internally, offering promise for debugging, bias detection, and improving model alignment. While still early-stage, this research marks a major step toward interpretability and transparency in foundation models. For educators and developers alike, it opens a fascinating conversation: If we can finally see what an LLM is “thinking,” how might that shape how we use and trust it? Dive deeper into the research on Anthropic’s blog: Inside Claude’s “thoughts”.
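To give a flavor of the underlying idea, the sketch below expresses a single activation vector as a sparse combination of a few dictionary directions, using greedy matching pursuit. The random dictionary is a stand-in for the one Anthropic learns from huge numbers of real model activations; this is the general sparse-coding idea, not their implementation.

```python
# Sketch of the dictionary-learning idea: write an activation vector as a
# sparse combination of dictionary directions (each direction is meant to
# correspond to one interpretable concept). Dictionary here is random.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_features = 16, 64               # activation dim, dictionary size

D = rng.normal(size=(n_features, d_model)) # rows = candidate "concepts"
D /= np.linalg.norm(D, axis=1, keepdims=True)

activation = rng.normal(size=d_model)      # one hidden-state vector

# Greedy matching pursuit: repeatedly grab the best-matching concept.
residual, codes = activation.copy(), {}
for _ in range(3):                         # keep only 3 active concepts
    idx = int(np.argmax(np.abs(D @ residual)))
    coef = float(D[idx] @ residual)
    codes[idx] = codes.get(idx, 0.0) + coef
    residual -= coef * D[idx]

print("active concepts:", codes)           # sparse, human-inspectable code
print("reconstruction error:", np.linalg.norm(residual).round(3))
```

The interpretability payoff comes from the sparsity: instead of thousands of opaque activation values, you get a handful of named features you can inspect and, in principle, intervene on.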
Faculty Left Behind? AI Adoption in Universities Raises Red Flags
A recent Inside Higher Ed article (July 22, 2025) reveals a concerning disconnect in higher education: while nearly 90% of surveyed institutions are integrating AI into teaching and research, faculty are largely sidelined in those decisions. According to the American Association of University Professors (AAUP), 71% of respondents say administrators are driving AI adoption without meaningful faculty input. This lack of shared governance has serious implications—from unclear AI policies to concerns over data privacy, academic freedom, and job security. Many educators unknowingly use AI-powered tools embedded in platforms like Canvas or Google Suite, while others are aware but use AI mainly for undervalued tasks like drafting recommendation letters or internal reports. Despite AI’s promise, 76% of faculty say it’s dampening morale, and 69% believe it’s harming student outcomes. The report calls for a more democratic and cautious approach—empowering faculty to shape how, when, and why AI enters the classroom. As AI becomes increasingly embedded in higher ed, the question lingers: will faculty shape the future of education, or be reshaped by it?
AI @ WVU:
📣 Coming Soon: Elsevier Interview with Aldo Romero
On August 14, Elsevier’s Physica B will publish an Expert Insights interview with Dr. Aldo Romero, highlighting his pioneering work at the intersection of AI and materials discovery. From graph neural networks to open datasets for magnetic and 2D materials, the conversation explores both opportunities and limitations in using AI to predict and design functional materials. Key takeaways include the importance of transparent data, reproducible benchmarks, and the evolving role of AI as a decision-making assistant in condensed matter physics. [More to come when the issue goes live.]
📄 ChatGPT and the Risk of Miscrediting Human Work
A recent research letter by Dr. Gangqing Hu, Dr. Peter Perrotta, and colleagues from WVU’s School of Medicine (published in JAAD International, Oct 2025) investigates how manuscripts polished with ChatGPT are frequently misidentified as AI-generated by detection tools like GPTZero. The study highlights fairness concerns for non-native English speakers and calls for better AI detection tools and transparent disclosure practices. Read the article
🤖 AutoTA: A WVU-Built Virtual Teaching Assistant
Thanks to Prashnna and Michael for sharing their latest work on AutoTA, a dynamic, intent-based virtual teaching assistant built using open-source LLMs. The system supports student interaction through real-time query understanding, proactive clarification, and modular AI agent architecture—all deployable within open-access environments. Their paper explores educational alignment, prompt engineering strategies, and technical evaluations across student use cases. A great step forward for equitable, scalable AI-enhanced education. Access the preprint
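As a rough illustration of what "intent-based" routing means (this is not the AutoTA codebase), the toy sketch below classifies a student query and dispatches it to a matching handler; the keyword classifier and canned responses are invented, whereas the real system uses open-source LLMs for both steps.

```python
# Toy sketch of intent-based routing for a virtual TA: classify what the
# student wants, then hand the query to the matching agent. Illustrative
# stand-in only; AutoTA does both steps with open-source LLMs.

INTENT_KEYWORDS = {
    "deadline": ["due", "deadline", "late"],
    "concept": ["explain", "what is", "how does"],
    "grading": ["grade", "points", "rubric"],
}

HANDLERS = {
    "deadline": lambda q: "Homework 3 is due Friday at 11:59 p.m.",
    "concept": lambda q: f"Here's a walkthrough of: {q}",
    "grading": lambda q: "See the rubric on the course page.",
    "unknown": lambda q: "Could you clarify what you're asking about?",  # proactive clarification
}

def classify(query: str) -> str:
    q = query.lower()
    for intent, words in INTENT_KEYWORDS.items():
        if any(w in q for w in words):
            return intent
    return "unknown"

for question in ["When is HW3 due?", "Explain backprop, please", "???"]:
    print(question, "->", HANDLERS[classify(question)](question))
```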
Community & Conversations:
🗓️ Our first meeting of the semester will take place on Friday, August 29, 2025, at 10:00 a.m. As always, if this time conflicts with your schedule, please reach out to alromero@mail.wvu.edu. If needed, we’ll circulate a new Doodle poll, as we did last year.
Stay tuned for an announcement about our August speaker—currently in discussion and to be confirmed in the coming days.
Let’s keep exploring, questioning, and building together.