INFILLA

Civic tech research project redesigning AI-enhanced search for Infilla Forum, a Q&A platform used by city planners in NYC and SF. Led research and ideation for the 13-week engagement, helping planners find zoning answers faster by using AI to surface regulatory information.

User Research
UX Design
Ideation
Prototyping
Presenting the Infilla project in San Francisco

Project Overview

When city planners need answers about zoning regulations, they turn to Infilla Forum—a Q&A platform where they can ask questions and search past discussions for expert-reviewed answers. But 38.5% of searches were failing: users either got zero relevant results or got lost in 51 or more of them.

Over the 13-week project, I led feature ideation with a 6-person design team to figure out how AI could enhance the search experience, making it easier for busy civic planners to find answers intuitively, with confidence in the results and transparency from the system.

The platform is deployed in major cities such as NYC and San Francisco, directly impacting public service delivery and urban development decisions. When search doesn't work, questions get duplicated, answers get buried, and planners waste time hunting for information that already exists.

Discovery & Research

To understand the problem at hand, my team started by analyzing 2,455 search events from the SF Forum. The data told a clear story: 12% of searches returned zero results due to keyword matching issues (hyphens, abbreviations, typos), and 26.5% returned so many results (51+) that users couldn't find what they needed. Combined, that's the 38.5% failure rate we needed to fix.
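
For illustration, the failure rate falls out of a simple classification over the search log. A minimal sketch in TypeScript, assuming a hypothetical event shape (the field names are ours, not Infilla's actual schema):

```typescript
// Classify search events into the two failure modes we measured.
// SearchEvent is an assumed shape, not the real Forum log schema.
interface SearchEvent {
  query: string;
  resultCount: number;
}

function summarizeFailures(events: SearchEvent[]) {
  const zero = events.filter((e) => e.resultCount === 0).length;
  const overloaded = events.filter((e) => e.resultCount >= 51).length;
  const pct = (n: number) => ((n / events.length) * 100).toFixed(1) + "%";
  return {
    zeroResults: pct(zero),          // ~12% of the 2,455 SF Forum events
    tooManyResults: pct(overloaded), // ~26.5% returned 51+ results
    combinedFailureRate: pct(zero + overloaded), // the 38.5% we set out to fix
  };
}
```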

Our first step was understanding the market landscape. We analyzed 10+ AI platforms including ChatGPT, Gemini, Perplexity, and NYC's MyCity Chatbot to see how others were building LLM-enhanced search experiences. Since AI-enhanced search is still relatively new—especially in civic and government spaces—there weren't many established patterns to learn from. We had to pull insights from adjacent experiences and piece together the patterns that would translate to our context.

Competitive analysis of 10+ AI platforms

Concurrently, we needed to figure out what was happening during the search process that led to this failure rate. We ran a survey with planning professionals to understand their relationship with AI and how they actually search for information in the system. The survey results sparked our initial ideation on how to make the search experience intuitive, responsive, and trustworthy. Alongside the survey, we conducted concept testing with 5 planners (30-minute sessions using a think-aloud protocol) to get deeper insight into their natural search behaviors.

Key Insight Leading to Our Pivot:

Our research revealed that planners don't trust AI to interpret or explain regulations. They want to read and verify the actual code themselves.

"I've run into situations at work where a member of the public shows me an AI generated answer that is incorrect, and explaining the issue/providing the correct information can be difficult. Permitting is already confusing for many people, and I'm concerned that any sort of generative AI responses might cause more confusion."
— Survey Respondent

Planning decisions have real consequences—legal exposure, project delays, professional credibility. Planners prefer to read quoted citations and trace them back to the source documents that bear on their questions.

Recency also matters: codes and regulations change, so users need to know whether the sources they're looking at are current and still applicable. And different users search differently—some rely on filters, some type keywords, some do both. There's no single "right" way to search, so the new experience needed to accommodate a variety of search habits.

The Pivot

We originally assumed we'd design something like a chatbot—an AI that could interpret regulations and explain them to users in plain, natural language. That was the playing field in the competitive analysis we conducted.

However, our survey insights and stakeholder feedback made it clear that planners didn't want AI to explain findings to them. They work self-sufficiently: they verify quoted citations themselves and want to trace the exact sections of code that apply to their question.

They wanted AI search as a research tool that surfaces relevant results, not an assistant that guides them to an answer. So we pivoted to what we called an "invisible AI" approach. Instead of putting AI front and center as an interpreter, we used it behind the scenes to enhance search quality—catching keyword variations, surfacing relevant sources from multiple places, generating quick highlights—while letting users stay in control of the actual verification process.
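
In sketch form, "invisible AI" means the model only scores relevance behind the scenes; the user-facing output stays an ordinary, verifiable list of sources. A minimal illustration, where the toy `embed` function is a stand-in for a real embedding model, not an Infilla API:

```typescript
// Placeholder embedding so the sketch runs end to end. A real system would
// call an embedding model here; this toy hashes characters into a vector.
function embed(text: string): number[] {
  const v = new Array(64).fill(0);
  for (let i = 0; i < text.length; i++) v[i % 64] += text.charCodeAt(i);
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank sources by semantic similarity; return results, never a generated answer.
function rank(query: string, docs: { id: string; text: string }[]) {
  const q = embed(query);
  return docs
    .map((d) => ({ id: d.id, score: cosine(q, embed(d.text)) }))
    .sort((x, y) => y.score - x.score);
}
```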

Design Process

With our new direction set, we established three design principles to guide every decision:

01 Transparency over automation

AI enhances search quality but doesn't replace human judgment. Always show sources, citations, and evidence trails so users can verify information themselves.

02 Guidance and support

Support users across different tech literacy levels. Provide clear interaction points and don't force everyone into one search pattern.

03 Trust and safety

AI can highlight relevant information, but it shouldn't interpret legal nuance. Include clear disclaimers and always link back to original sources.

From there, we ran Crazy 8 sketching sessions to rapidly explore concepts, then narrowed down through team critique and client feedback. Our competitive analysis helped us identify core patterns that worked well elsewhere—inline citations, expandable source panels, confidence indicators, suggested follow-up questions.

Design sketches and ideation

Early design sketches and concept exploration

The concepts we explored included:

Side panel previews for quick source evaluation

AI-suggested filter chips based on query analysis

Multi-source results combining Forum posts, codes, and laws

Quick highlights for external documents

Address and attachment-based search filters

The Solution

The final design transforms Forum's keyword search into an AI-enhanced experience that prioritizes transparency over automation.

Enhanced Search Bar

The redesigned search bar addresses the core usability issues. Semantic search catches keyword variations (hyphens, abbreviations, typos) without requiring exact matches, and smart filter suggestions appear based on what you're searching for. We added two new filters, Add Attachment and Add Address, for context-specific searching, and organized everything into References, Categories, and More Filters dropdowns to reduce cognitive load.

Smart filter suggestions based on search query

Add Attachment filter for document-specific search

Add Address filter for location-based search
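
To make "catches keyword variations" concrete, here is a minimal sketch of the normalization such a pipeline might apply before matching. The rules and abbreviation table are illustrative examples of the variation classes we targeted, not the production implementation; typo tolerance would come from fuzzy or embedding-based matching downstream:

```typescript
// Illustrative query normalization ahead of semantic matching.
const ABBREVIATIONS: Record<string, string> = {
  adu: "accessory dwelling unit",
  far: "floor area ratio",
  cup: "conditional use permit",
};

function normalizeQuery(raw: string): string {
  return raw
    .toLowerCase()
    .replace(/-/g, " ") // so "mixed-use" also matches "mixed use"
    .split(/\s+/)
    .map((token) => ABBREVIATIONS[token] ?? token) // expand common planning abbreviations
    .join(" ")
    .trim();
}

normalizeQuery("Mixed-Use ADU setbacks");
// => "mixed use accessory dwelling unit setbacks"
```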

Multi-Source Results

Search results now pull from multiple source types into one unified view, with the most relevant results ranked at the top: Forum posts with expert-reviewed answers, local code sections, state law references, and external documents. Horizontal navigation tabs let users filter by source type if they want to focus on just codes or just Forum discussions. Each result shows a preview snippet explaining why it matched the search.
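
A sketch of the unified result shape behind this view; the names are assumptions, but the design point is one ranked list spanning source types, with the tabs acting as a simple filter:

```typescript
// One result type across Forum posts, codes, laws, and external documents.
type SourceType = "forum" | "localCode" | "stateLaw" | "external";

interface SearchResult {
  sourceType: SourceType;
  title: string;
  snippet: string;         // preview explaining why the result matched
  lastUpdated: string;     // ISO date, surfaced as a recency signal
  expertReviewed: boolean; // drives the badge on verified Forum answers
  url: string;
}

// The horizontal navigation tabs are just a filter over sourceType.
function filterByTab(results: SearchResult[], tab: SourceType | "all") {
  return tab === "all" ? results : results.filter((r) => r.sourceType === tab);
}
```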

Side Panel Preview

This became the standout feature. When you click a result, a side panel opens with a preview—you can see AI-generated highlights, understand why this source was surfaced for your query, and click through to the full document if it looks relevant. All without leaving the search results page.

Side panel preview showing AI-generated highlights and source information

"I like the side panel because I can see what's in [the document] without committing to opening it."
— Concept Test Participant
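
Concretely, the panel can be thought of as rendering a payload like the one sketched below. The names are assumptions, not Infilla's actual API; the key choice is that every AI-generated highlight carries a span pointing back into the official text, so verification is always one click away:

```typescript
// Assumed shape of the side panel's data, for illustration only.
interface Highlight {
  text: string;        // the AI-generated highlight shown in the panel
  sourceUrl: string;   // deep link to the original document
  startOffset: number; // character span in the source the highlight came from
  endOffset: number;
}

interface SidePanelPreview {
  title: string;
  whyThisAppears: string; // explanation section added after concept testing
  lastUpdated: string;
  highlights: Highlight[];
  disclaimer: string; // fixed copy: "Highlights are AI-generated for discovery, not legal advice. ..."
}
```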

Trust Signals

We wove trust-building elements throughout the search experience:

  • • "Last updated" timestamps on all sources
  • • Expert review badges on Forum answers that have been verified
  • • AI disclaimer: "Highlights are AI-generated for discovery, not legal advice. Verify details in official text."
  • • Every AI-generated highlight links directly to the source text

Validation & Testing

We tested the designs with 5 planning professionals in 30-minute moderated sessions. Using think-aloud protocol, we walked them through an interactive Figma prototype and observed how they searched, what confused them, and what resonated.

What we tested:

  • Concept 1: Enhanced search experience (search bar + filters)
  • Concept 2: Search output experience (results layout + side panel)

What we learned:

Users search in diverse ways—some rely heavily on filters, others type keywords, and some switch between both. So we made sure to support multiple entry points rather than forcing one pattern.

The side panel got strong positive feedback across the board. Users liked being able to preview sources without committing to opening them. This became a priority feature.

There was initial confusion about the new filters (Add Attachment, Add Address). Users weren't sure what they did at first, though they could imagine how they'd use them once explained. This led us to add onboarding tooltips and hover explanations.

Onboarding tooltip explaining the Add Attachment filter

Users expressed concern about AI accuracy when it comes to legal documents. They worried about nuances in interpretation that AI might miss. So we were careful to position AI features as "highlights" for discovery, not "summaries" or legal advice—and we added clear disclaimers.

Recency came up repeatedly as critical for trust. If users can't tell when a source was last updated, they don't know if they can rely on it. We made timestamps prominent.

"Quick highlight is helpful as long as it's semi-accurate."
— Concept Test Participant

Iterations we made:

Added onboarding flow for new filter features

Added "Why this appears" explanation section in side panel

Added tooltip on hover for filter buttons

Improved visual hierarchy in side panel (clearer separation between metadata and content)

Explored search bar layout variants to reduce visual clutter

Future Ideas & Prospects

There are many avenues this platform could take. One participant mentioned that planners and citizens would probably review results differently—planners might want to see codes first, while citizens might gravitate toward Forum posts. Adaptive result ordering based on user role could be worth exploring.

Source freshness detection is another opportunity. If AI could flag potentially outdated laws or codes, that would address a key trust concern users raised.

Mobile optimization would be key for on-the-ground support and for user roles, such as citizens, who primarily use tablets or phones. Adapting the side panel experience to smaller screens would make the design more practical for real-world use.

Team Credits

  • Tanisha Naik
  • Adrian Cardova
  • Job Llorente
  • Madhuri Sharma
  • Maria Elvina
  • Kristine Jiao

Tools & Technologies Used

  • Figma
  • FigJam
  • User Research
  • Competitive Analysis
  • Think-Aloud Concept Testing
  • Crazy 8 Sketching
  • Prototyping
  • Usability Testing
  • v0
  • Claude Code
  • Presentation Decks