A new chapter for JSTOR’s AI research tool: Reflections on community engagement, insights from ALA, and what’s next
July marks a major milestone for JSTOR—our AI research tool (formerly known as the interactive research tool) has officially rolled out to all JSTOR-participating institutions. Originally released in limited beta in 2023, the tool is designed to help users explore JSTOR’s trusted content more deeply, uncover new insights, and enhance their research with support from AI and other advanced technologies. Developed in collaboration with librarians, faculty, and students, and refined with feedback from users worldwide, the tool is now available to all users at participating institutions as part of our commitment to providing functionality that yields productive, meaningful research experiences.
In this blog post, we provide background on the tool’s development, highlight recent updates and enhancements, and reflect on the conversations we had with our community about the tool at this year’s American Library Association conference.
The journey
The development of our AI research tool dates back to early 2023, when we began investigating how we could leverage new technology to address common obstacles JSTOR users face, such as finding, evaluating, and understanding content more effectively. Grounded in guiding principles aligned with the values of the academic community and our mission to expand access to knowledge and education, we designed the tool collaboratively from the outset, with our community providing crucial input and feedback at every step of the way.
In August 2023, we launched the tool in a limited beta, allowing us to learn quickly and iterate in a low-risk setting. Since then, we’ve been listening closely to our community’s feedback, and inviting engagement at every turn. We’ve created space to hear from users directly at conferences, actively encouraged open dialogue, and shared insights through presentations and blog posts. We’ve also worked closely with librarians, faculty, students, and publishers to ensure the tool complements (rather than complicates) the research process, and shared success stories and case studies to inspire others to get the most out of the new technology.
After two full years of community feedback and confirming we could sustainably offer the tool, we now have the evidence we need to move forward with a full rollout and remove the beta label. Throughout this learning process, user feedback and a deeper understanding of community needs and preferences have helped us refine and improve the tool’s functionality—ultimately enabling more dynamic, equitable, and meaningful engagement with JSTOR’s trusted corpus.
What’s new with JSTOR’s AI research tool?
The latest updates are based on usability testing, A/B experiments, and direct feedback from librarians and users—focused on making the tool easier to find, smoother to use, and more intuitive in real research moments.
- Smarter text selection
Highlighting a passage to learn more is now faster and more responsive. We’ve streamlined the experience in JSTOR’s PDF viewer so that selecting text and triggering an AI research tool action feels seamless.
- Improved access toggle visibility
Previously, many users didn’t realize the tool was available or how to activate it. We’ve moved the on/off toggle to a more prominent position, making the tool significantly easier to find and to turn on or off.
- Streamlined login experience
Some users were confused when asked to log in again after enabling the tool—especially if they were already authenticated via their institution. We’ve updated the login prompt with clear, custom messaging to differentiate between institutional and personal accounts, empowering more users to get the most out of JSTOR by creating their own account.
- Improved mobile functionality
About 30% of JSTOR traffic comes from mobile users, and we’ve addressed bugs that previously disrupted the research tool experience on smaller screens. The tool is now more responsive and user-friendly across all devices.
Reflections from ALA 2025

Ashank Rai, Product Manager, at the research tool demo during ALA.
At the American Library Association’s annual conference in Philadelphia, ITHAKA team members Amy Gay (Senior Digital Humanities Outreach Manager) and Ashank Rai (Product Manager) hosted live demo sessions of the AI research tool at the JSTOR booth. Their insights illuminate ongoing conversations around AI technologies in classrooms and help us better understand the concerns from users’ perspectives as we move forward with the research tool rollout.
What was our goal in holding live demo sessions at ALA this year?
Ashank: My goal was to help librarians better understand JSTOR’s AI research tool—how it works, what it’s designed to do, and how it fits into academic workflows. As we were on the cusp of rolling out the tool to all JSTOR-participating institutions, we wanted to showcase what the tool can do and address any questions and concerns from the community. In the education space, there are a variety of questions and concerns about the impact of AI on learning. ALA gave us the opportunity to meet these concerns head-on, and show how the AI research tool is meant to work with—and not for—the student.
Amy: I wanted to learn more about how generative AI is already being used by students, instructors, and librarians for research and teaching purposes. I also wanted a better understanding of how we can support these communities with resources related to JSTOR’s AI research tool and where they see the tool being valuable, particularly from a teaching standpoint. Skills like AI literacy and prompt literacy are becoming essential, and that means educators need support—not just in using the tool, but in learning how to teach with it effectively.
Was there any conversation or feedback that stood out to you?
Amy: Yes—several librarians asked about the possibility of partnering with them and others at their institutions, such as Centers for Learning and Teaching, to have webinars for faculty on how to use the AI research tool with their students. They mentioned that there has been interest at their institutions but faculty are unsure of how to get started with integrating it into their teaching. This was something we hadn’t previously considered, but it opened our eyes to a new way we can support institutions: not just through the tool itself, but by being a resource for faculty development.
What did attendees like about the research tool?
Ashank: The principles we employed while building out the research tool include keeping the content as the focus, and making sure we are aiding the research process rather than providing the end product. To support this principle, we built the tool to use only the text in front of it to generate its response, and to cite exactly where in the article its response comes from. Along these lines, there are other features within the tool that allow researchers to show that they used the tool when doing their research, including downloading conversation history and citing tool responses themselves. This commitment to transparency and visibility seemed to really resonate with attendees.
Amy: To add to that—many attendees appreciated that the tool is confined to JSTOR content. It doesn’t pull from the open web, which makes it feel more focused and trustworthy.
What common concerns and/or questions did attendees raise, and how did you respond?
Ashank: A lot of people brought up the environmental impact of generative AI, which is absolutely valid. We’re actively considering those impacts as well.
With the AI research tool, we have tried to do our own part. The whole system is plug and play, so we can seamlessly swap in a more efficient—and less environmentally impactful—model that meets the needs of our users. In fact, we already use small language models (SLMs) instead of large language models (LLMs) for certain tasks, depending on the need. Similarly, if we know a response is not going to change for a given article—such as a user asking “What is this text about?”—we save those responses for reuse, making the whole system more sustainable.
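For readers curious what per-article response reuse looks like in practice, here is a minimal sketch. JSTOR’s actual implementation and internal APIs are not public, so every name below (`CachedResponder`, `model_fn`, the cache key) is illustrative: the idea is simply that the expensive model call runs only on a cache miss, and repeated questions about the same article return the saved answer.

```python
class CachedResponder:
    """Illustrative sketch: reuse saved responses for repeated article questions."""

    def __init__(self, model_fn):
        self.model_fn = model_fn   # the (expensive) language-model call
        self.cache = {}            # (article_id, prompt) -> saved response
        self.model_calls = 0       # track how often the model actually runs

    def ask(self, article_id, prompt, article_text):
        key = (article_id, prompt)
        if key not in self.cache:  # only invoke the model on a cache miss
            self.model_calls += 1
            self.cache[key] = self.model_fn(prompt, article_text)
        return self.cache[key]


# Example with a stub "model" so the sketch runs without any AI service.
responder = CachedResponder(lambda prompt, text: f"Summary of: {text[:20]}")
first = responder.ask("article-42", "What is this text about?", "An essay on archives.")
again = responder.ask("article-42", "What is this text about?", "An essay on archives.")
assert first == again
assert responder.model_calls == 1  # the second request reused the saved response
```

The same pattern generalizes: swapping `model_fn` for a smaller or more efficient model is transparent to callers, which is the “plug and play” property described above.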
(Editor’s note: You can now read our response regarding the tool’s environmental impact in the FAQ section of the research tool page.)
Where can I find more information about the AI research tool?
You can find more information, existing users’ experiences with the tool, and FAQs on our main AI research tool page.
We understand you may need to learn more about the tool before deciding to adopt it at your institution and in your teaching. For librarians, faculty, and anyone curious about the tool, we invite you to join or view our webinar, which includes a research tool demo, insights into the tool’s technical foundations and design, and ideas for incorporating it into teaching and learning. Unable to attend the live webinar? Register now and you’ll receive a link to the recording afterward.
Have you incorporated the research tool in your teaching or research? Let us know!
About the authors

Rumika Suzuki Hillyer is a Content & Community Engagement Manager at ITHAKA, where she leverages her teaching background and social media skills to connect with a diverse range of JSTOR users. From enrolling in an ESL program at a community college to earning a doctoral degree in sociology, Rumika has developed a comprehensive understanding of various tiers of higher education in the U.S. and their associated challenges. She is excited to embark on her journey with ITHAKA, where she hopes to contribute to its mission and promote accessible and equitable higher education for all.
Amy Gay has been at ITHAKA for two years as its Senior Digital Humanities Outreach Manager. Having been part of the world of digital scholarship for nearly ten years, she enjoys being integrated in such a vibrant, continuously evolving community.
Ashank Rai is a Product Manager at ITHAKA, where he works with a cross-disciplinary team to build tools that help researchers get more out of JSTOR’s rich content. He’s especially interested in creating thoughtful experiences that support deep learning and teaching.