PIT-NE Generative AI in Higher Education with a PIT Perspective Panel

On December 3rd, 2024, the AI in Higher Education Initiative Group of Public Interest Technology – New England hosted a panel to discuss approaches to integrating generative AI in higher education. The virtual event, titled Generative AI in Higher Education with a PIT Perspective, brought together panelists from multiple perspectives to talk about the adoption of generative AI at their institutions and how a PIT lens can help shape solutions.

  • Eric Gordon, Professor of Media Art and Founder of the Engagement Lab, Emerson College
  • Beth Simone Noveck, Director, Burnes Center for Social Change; Professor, Northeastern University
  • Yannis Paschalidis, Director, Hariri Institute for Computing and Computational Science & Engineering, Boston University
  • Yunus Doğan Telliel, Assistant Professor of Anthropology and Rhetoric, WPI
  • Moderated by Colette Basiliere, PIT-NE Executive Director

The event began with each panelist discussing the current role of generative AI at their institution. Yunus Telliel started by describing his work with colleagues developing four-week micro-courses on critical AI literacy for WPI faculty, which address misplaced presumptions about the neutrality and accuracy of AI systems. The goal of this work was to create a unified framework around AI on campus, because each department had its own concerns about AI use based on the focus of its discipline.

Yannis Paschalidis spoke next about his time on the Boston University AI Task Force, set up by the Provost, which concluded that AI should be embraced on campus with a critical lens and that classes should move from product-focused to process-focused assessment to achieve educational outcomes while integrating tools like ChatGPT. In addition, AI has transformed research at the Hariri Institute: more sophisticated models can serve more diverse needs, and physical labs can be transformed to serve a larger population.

Beth Simone Noveck discussed her work, which focuses more on external audiences at the intersection of government and technology, and how generative AI’s popularity is creating a movement around upskilling in the public sector after decades in which educating the current workforce has not been a priority. Generative AI also has the potential to make government more efficient, so Beth has been delivering trainings to enable the adoption of AI in the public sector.

Eric Gordon echoed the importance of educating government employees, which will free humans from mundane tasks and allow them to focus on tasks that are “distinctly human”. Eric said the same logic applies to community processes, where the use of generative AI tools has allowed for more imagination and a different kind of output. The fact that AI is now a tool in our toolkits means there is more urgency to figure out what this design space and process looks like.

The next section of the panel focused on the challenges the panelists face in the adoption of AI and how Public Interest Technology is included in those conversations.

Yunus began by discussing the need to balance the public interest with the private interests inherent to the space, since for-profit companies are the main drivers of generative AI tools. While Public Interest Technology can serve as a wedge to create space for values like ethics in these conversations, Yunus thinks the goal of this work should be to create an alternate space where people can interact with these tools and learn how to adapt, while having honest conversations about their concerns and the value the tools add.

Yannis jumped in to add that commercial AI tools create challenges around privacy and ownership, because information from interactions may be used to train future models. He pointed out that commercial models are also generic, whereas academia typically needs specialized models, so higher education needs to maintain its ability to build them. This leads to concerns about the growing gap between the computational resources of big tech companies and those of academia.

Adding to the topic, Beth discussed how the trend of universities restricting hiring and graduate admissions in the social sciences and humanities, drawn by the pull of AI funding, is hindering multidisciplinary research in this space. The key is to ask questions like “What are the questions that matter?” and “What are the problems that merit solving?” to strike a balance between work that is computationally interesting, work that critiques AI, and work that builds solutions to public interest problems with this technology.

Eric responded to these remarks by noting that applied research is often neglected in academia, which makes it important to find justification for applied research within existing disciplines. He brought up the crisis of perception around generative AI, with individuals being either blindly celebratory or deathly afraid. Much of the community fear stems from privacy concerns, especially around government data collection, so there is a need for sandboxes that create a sense of security before the full adoption of these technologies.

Eric posed the question of how to effectively create these sandboxes to the other panelists as the final topic.

Yannis began with remarks about how this can happen by partnering with industry, since there are now efforts where you can download a commercial model and fine-tune it for your needs with your own data. At other times, building your own model is necessary, especially in research involving data use and IRB agreements. Yannis added that regulation needs to come at the national level, similar to the European Union, and that open-source models will help with the development of models at institutions of all kinds.

Yunus chimed in with comments about the importance of building interdisciplinary advising teams, because people need critical thinking skills and curiosity to effectively build and evaluate models. He added that humans will need to decide how they want to position themselves with respect to AI in order to better determine what AI we want to build and how we think it should be used.

Beth continued on this topic by adding that while AI is not the final solution, it is creating interest in areas like upskilling that we have not seen before, so we need to take advantage of this moment. In Beth’s opinion, the answer to “What will we do with this opportunity?” is divided between the individuals buying contracts for AI resources and the individuals creating the pedagogy, scholarship, and research to integrate them. No one can yet say whether any institution has done an effective job of buying the tools, educating individuals, creating sandboxes, and evaluating how all of this is altering higher education.

Eric thanked the other panelists for their responses and echoed that this feels like an urgent matter on both the technical side and the human side.

A question from the chat about experiential learning programs in this space was answered with mentions of Northeastern University’s AI for Impact Co-op, PIT-NE’s Impact Technology Fellowship, and Boston University’s Spark!.

PIT-NE thanks the panelists for their time and insights and looks forward to hosting more of these events in the future.