Highlighted Projects

Building Governance Agents

Overview

The GoverNoun project explores the use of AI agents to revitalize decentralized governance within Nouns DAO. Acting as an administrative assistant, community resource, and voting representative, GoverNoun aims to address low participation and lack of strategic direction in DAOs. It enhances governance processes, maintains institutional memory, and helps set a renewed political vision for decentralized communities through AI-powered insights and engagement.

Tokenizing Organizational Knowledge

Overview

In this project, we examine how organizations can recognize and reward human knowledge contributions as AI becomes embedded in organizational decision-making. As people increasingly work alongside AI systems, it is often unclear who deserves credit for ideas, insights, and improvements that emerge from human–AI collaboration. We develop a framework for transparent knowledge crediting in human–AI systems, proposing the use of combined AI and blockchain infrastructures to trace contributions across different types of tasks and knowledge. By clarifying how human insight adds value alongside AI, the research offers guidance for building intelligent organizations that support learning, fairness, and long-term performance.

Developing Synthetic Stakeholders

Overview

In this project, we examine how emerging technologies can give voice to overlooked stakeholders such as the natural environment or future human generations. We introduce the concept of synthetic stakeholders, in which non-traditional stakeholders are formally recognized and represented by technological agents capable of acting and learning on their behalf. The framework, and ensuing lab experiments, show how organizations can more consistently and responsibly include these stakeholders in decision-making. The project highlights how technology can reshape governance and accountability in the face of long-term and complex societal challenges.

Artificial Intelligence as a Co-founder

Overview

In this project, we study how large language models (LLMs) shape entrepreneurial thinking. Participants were asked to design new ventures and describe why they believed their ideas would work, once on their own and once with the help of an AI tool. By comparing these two experiences, we observe how AI changes the way people reason, connect ideas, and articulate cause-and-effect relationships. We find no significant improvement in idea generation with the assistance of LLMs on average. However, effects vary with initial performance: participants who started with lower-quality unaided ideas show clear gains, whereas those who began with higher-quality ideas exhibit smaller or even negative effects.

IF LAB

Director: Alex Murray

Lundquist College of Business

University of Oregon

1226 University of Oregon, Eugene, OR 97403-1226



EST. 2025

© IF Lab