This article provides an overview of the automated evaluation of reviews and surveys in flowit.
It explains which analysis features are available, when and how to use them, and why they are valuable for surveys and reviews.
Note: The Review/Survey Evaluation and Dashboards feature is optional. If it is not available in your account, please contact your Customer Success representative to learn more.
Overview
The Evaluation Pipeline automatically creates dashboards from survey and review data.
As soon as a survey is closed or review groups are exported, processing starts automatically.
The result is a structured, AI-powered dashboard containing insights at company and team level.
What the pipeline delivers:
- Automated dashboard creation with fast turnaround
- Team insights for teams with ≥ 5 members (surveys) or ≥ 3 members (reviews)
- Multi-language support for dashboards in all flowit system languages
- Immediate use of user attributes for filtering and segmentation
How the Evaluation Pipeline Works
Surveys
Trigger:
The survey submission end date has passed.
Process:
1. The survey closes.
2. Survey responses are anonymized.
3. The pipeline processes the responses (they become available in connected insights).
4. The dashboard is generated.
Important:
Once processing has started, it cannot be undone, even if the survey is reopened later.
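As a mental model, this survey flow can be pictured as a single one-way run that only starts once the submission end date has passed. The Python sketch below is purely illustrative; the function name, field names, and data shapes are assumptions made for this example, not flowit's actual implementation.

```python
from datetime import datetime, timezone

def run_survey_pipeline(survey: dict):
    """Illustrative only: mirrors the survey steps described above."""
    # 1. Processing starts only after the submission end date has passed.
    if datetime.now(timezone.utc) < survey["submission_end"]:
        return None  # survey still open, nothing happens yet

    # 2. Responses are anonymized before any further processing.
    anonymized = [{"answers": r["answers"]} for r in survey["responses"]]

    # 3. The anonymized responses are processed (and become available in
    #    connected insights), then the dashboard is generated. From this
    #    point on the run cannot be undone, even if the survey is reopened.
    return {
        "survey_id": survey["id"],
        "insights": anonymized,
        "dashboard": "generated",
    }
```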
Reviews
Trigger:
When a review cycle has ended, a Customer Success agent can trigger the evaluation of a group of reviews in the backend.
Process:
1. The Customer Success agent starts the processing of a review group.
2. All review data is anonymized.
3. The pipeline processes the responses (they become available in connected insights).
4. The dashboard is created.
Important:
Once processing has started, no changes are possible.
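The review flow differs from the survey flow mainly in its trigger: it is started manually for an exported group of reviews. The sketch below is again only an illustration under assumed names and data shapes, not the real backend code.

```python
def process_review_group(review_group: dict, triggered_by: str):
    """Illustrative only: mirrors the review steps described above."""
    # 1. Unlike surveys, processing is started manually by Customer Success.
    if triggered_by != "customer_success":
        raise PermissionError("Only Customer Success can start review processing.")

    # 2. All review data is anonymized first.
    anonymized = [{"answers": r["answers"]} for r in review_group["reviews"]]

    # 3. Responses are processed and the dashboard is created; after this
    #    point, no changes to the group are possible.
    review_group["locked"] = True
    return {
        "group_id": review_group["id"],
        "dashboard": "created",
        "insights": anonymized,
    }
```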
What Insights Are Generated?
Sentiment classification provides an initial understanding of the overall tone, while the detection of the most important topics highlights the core subjects driving the feedback. Finally, AI-generated summaries offer concise, digestible overviews for quick consumption.
Each of the following summaries offers a specific perspective:
Management Summary: Executive overview
Pain Points: Key challenges and issues
Recommended Actions: Suggested next steps
Follow-up Questions: Ideas for deeper analysis
Correlation Analysis: Numeric relationships (≥ 10 paired responses; see the sketch below)
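The ≥ 10 paired-responses rule for the correlation analysis can be thought of as a simple gate that is checked before any coefficient is computed. The snippet below shows that idea using Python's standard library; it is a hypothetical illustration, not the calculation flowit actually uses.

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

MIN_PAIRED_RESPONSES = 10  # threshold mentioned above

def correlate(answers_a, answers_b):
    """Correlate two numeric questions only if enough paired answers exist."""
    # Keep respondents who answered both questions (answers aligned by index).
    pairs = [(a, b) for a, b in zip(answers_a, answers_b)
             if a is not None and b is not None]
    if len(pairs) < MIN_PAIRED_RESPONSES:
        return None  # too little data: no correlation is reported
    xs, ys = zip(*pairs)
    return correlation(xs, ys)
```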
Team-Specific Insights
Insights are generated for:
The entire company
Each team with ≥ 5 members (surveys) or ≥ 3 members (reviews)
Since January 31, 2026, teams are evaluated both including and excluding their subteams, provided the resulting group meets the minimum size.
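Put differently, the minimum team size acts as an eligibility check that is applied once with subteams counted in and once without them. The sketch below illustrates that check; the team structure and field names are invented for this example and do not reflect flowit's data model.

```python
MIN_TEAM_SIZE = {"survey": 5, "review": 3}  # thresholds from this article

def eligible_team_views(teams: dict, kind: str):
    """Return the team views that are large enough for their own insights.

    `teams` maps a team name to its direct member count and subteam names,
    e.g. {"Sales": {"members": 4, "subteams": ["Sales DACH"]},
          "Sales DACH": {"members": 3, "subteams": []}}.
    """
    minimum = MIN_TEAM_SIZE[kind]
    views = []
    for name, team in teams.items():
        direct = team["members"]
        with_subteams = direct + sum(teams[sub]["members"] for sub in team["subteams"])
        # A team can be evaluated including its subteams, excluding them,
        # or both, depending on which of the two counts meets the minimum.
        if with_subteams >= minimum:
            views.append(f"{name} (incl. subteams)")
        if direct >= minimum and team["subteams"]:
            views.append(f"{name} (excl. subteams)")
    return views
```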
Multi-Language Support
flowit supports six system languages (DE, EN, FR, IT, ES, PT-PT).
In the AI dashboards, the components are translated as follows:
Automatically translated:
- Questions & sections
- Answer options
- Numeric labels
- All generated summaries
- Cluster names
Not translated:
- User attributes
- Team names
- Original user text responses
Requirements & Technical Boundaries
To start the evaluation of a survey or review, the following requirements must be met (see the sketch after this list):
- Surveys must be closed
- Reviews must be exported as groups by Customer Success
- The review periods of all reviews that are evaluated together must match
- All reviews that are evaluated together must use the same template
- The minimum team size must be reached for team insights
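A pre-check of these requirements could look like the sketch below; the field names (`closed`, `review_period`, `template_id`) are assumptions chosen for illustration, not flowit's data model.

```python
def evaluation_blockers(item: dict):
    """Return the reasons (if any) why an evaluation cannot start yet."""
    problems = []
    if item["type"] == "survey" and not item.get("closed", False):
        problems.append("The survey is not closed yet.")
    if item["type"] == "review_group":
        reviews = item["reviews"]
        if len({r["review_period"] for r in reviews}) > 1:
            problems.append("The review periods of the group do not match.")
        if len({r["template_id"] for r in reviews}) > 1:
            problems.append("The reviews in the group use different templates.")
    return problems  # an empty list means the evaluation can start
```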
Frequently Asked Questions
Why don't small teams receive individual insights?
To protect anonymity and ensure privacy. Teams below the minimum size are included only in the company-wide analysis or the analysis of their parent teams.
Can a dashboard be recreated later?
No. Once processing has started, the same data cannot be evaluated again.
Can multiple templates be combined in one dashboard?
No. A uniform question structure is technically required for processing and display on the dashboards.
Draft Handling
- Only completed submissions are processed
- Drafts are automatically excluded
- All metrics reflect final responses only (see the sketch below)
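In code terms, draft handling amounts to a single filter that runs before any metric is computed. The one-liner below is an illustrative sketch; the `status` field and its values are assumed names, not flowit's actual schema.

```python
def final_submissions(submissions: list):
    """Keep only completed submissions; drafts never enter any metric."""
    # `status` is an assumed field name used here for illustration.
    return [s for s in submissions if s.get("status") == "completed"]
```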