AAAI 2026 Main Conference

January 24, 2026

Singapore, Singapore


Investigating the effects of climate change and global warming caused by greenhouse gas (GHG) emissions has been a central focus worldwide. The production, use, and disposal of consumer products contribute substantially to these emissions, so it is important to build tools that estimate the environmental impact of consumer goods; an essential part of this is conducting Life Cycle Assessments (LCAs), which specify and account for the processes involved in producing, using, and disposing of a product. We present SpiderGen, an LLM-based workflow that integrates the taxonomy and methodology of traditional LCA with the reasoning capabilities and world knowledge of LLMs to generate the procedural information used in an LCA. We additionally develop evaluation methods for this use case and evaluate SpiderGen's output against real-world LCA documents. We find that SpiderGen produces LCA process information that is either fully correct or contains only minor errors 60% of the time on average. The remaining missed processes and hallucinated errors arise primarily from differences in the level of detail across LCA documents, as well as from differing interpretations of "scope", i.e., which auxiliary processes must also be included. We also demonstrate that SpiderGen outperforms several baseline techniques, such as chain-of-thought prompting and one-shot prompting. Finally, we highlight that SpiderGen has the potential to drastically reduce the human effort and cost of estimating carbon impact: it produces LCA process information for under $1 USD in under 10 minutes, whereas a status-quo LCA costs over $25,000 USD and takes up to 21 person-days.
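The abstract describes SpiderGen only at a high level; the full pipeline is in the paper. Purely as an illustration of the general shape of such a workflow, a per-stage LLM querying loop might look like the following. The stage names, prompt wording, and the `call_llm` stub are all hypothetical stand-ins, not taken from the paper:

```python
# Illustrative sketch (NOT SpiderGen's actual method): query an LLM once
# per life-cycle stage for candidate processes, then parse the reply.

# Conventional LCA stages (a generic list, assumed for this sketch).
LCA_STAGES = ["raw material acquisition", "manufacturing",
              "distribution", "use", "end-of-life"]

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a chat-completion API here.
    return "- example process"

def generate_lca_processes(product: str) -> dict:
    """Ask the LLM for candidate processes in each life-cycle stage."""
    processes = {}
    for stage in LCA_STAGES:
        prompt = (f"List the industrial processes involved in the "
                  f"'{stage}' stage of the life cycle of: {product}")
        reply = call_llm(prompt)
        # Expect one '-'-prefixed process per line of the reply.
        processes[stage] = [line.lstrip("- ").strip()
                            for line in reply.splitlines()
                            if line.strip()]
    return processes

result = generate_lca_processes("cotton t-shirt")
```

A real system would additionally need the taxonomy grounding and scoping rules the abstract alludes to; this sketch only shows the outer prompt-and-parse loop.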

