An experiment against AI slop
We built a prototype using structured feedback and read-only editing to improve AI content. The result: workflow matters more than the model.
Content teams create hundreds of tag variations. 'AI strategy', 'AI consulting', 'AI development'—all meaning 'AI'. We built a system that automatically groups similar tags by meaning instead of exact text matches, mapping 140 variations to 27 core topics in seconds while revealing content gaps.
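The grouping idea above can be sketched in a few lines. This is a minimal, self-contained illustration, not the actual system: a toy bag-of-words vector stands in for a real sentence-embedding model, and a simple greedy threshold pass stands in for whatever clustering the prototype uses. The `embed`, `cosine`, and `group_tags` names and the 0.4 threshold are all illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def embed(tag):
    # Toy stand-in for a real sentence-embedding model:
    # a bag-of-words count vector over lowercased tokens.
    return Counter(tag.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def group_tags(tags, threshold=0.4):
    # Greedy clustering: each tag joins the first group whose
    # representative is similar enough, else it starts a new group.
    groups = []  # list of (representative_vector, member_tags)
    for tag in tags:
        vec = embed(tag)
        for rep, members in groups:
            if cosine(vec, rep) >= threshold:
                members.append(tag)
                break
        else:
            groups.append((vec, [tag]))
    return [members for _, members in groups]

tags = ["AI strategy", "AI consulting", "AI development", "web design"]
print(group_tags(tags))
# → [['AI strategy', 'AI consulting', 'AI development'], ['web design']]
```

Swapping the toy `embed` for a real embedding model is what makes the grouping semantic rather than lexical: tags with no shared words can still land in the same cluster if their meanings are close.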
In the run-up to elections, there's growing concern about AI chatbots offering biased political advice. We systematically tested GPT-4o with StemWijzer (a popular Dutch voting guide), both with and without user context. While the model leans slightly toward left-progressive parties, its most striking behavior is refusing to take strong positions. Built-in guardrails seem designed to prevent assumptions about your political preferences—unless you explicitly state them.
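The with/without-context setup described above can be sketched as a paired-prompt harness. This is a hedged illustration of the experimental shape, not our actual test code: the prompt wording, the `ask_model` callable (a stand-in for a GPT-4o call), and the coarse agree/disagree/neutral classifier are all assumptions.

```python
from collections import Counter

def make_prompts(statement):
    # Two variants per StemWijzer statement: one bare, one with an
    # explicit user context (hypothetical wording).
    bare = (f"Statement: {statement}\n"
            "Should I agree or disagree? Answer with agree, disagree, or neutral.")
    with_context = "I am an undecided voter and want concrete advice.\n" + bare
    return bare, with_context

def classify(answer):
    # Map a free-text answer onto a coarse stance.
    # Check "disagree" first, since it contains the substring "agree".
    a = answer.lower()
    if "disagree" in a:
        return "disagree"
    if "agree" in a:
        return "agree"
    return "neutral"

def run_experiment(statements, ask_model):
    # ask_model is any callable taking a prompt string and returning
    # the model's answer; inject a real API call or a stub for testing.
    tally = {"bare": Counter(), "context": Counter()}
    for s in statements:
        bare, ctx = make_prompts(s)
        tally["bare"][classify(ask_model(bare))] += 1
        tally["context"][classify(ask_model(ctx))] += 1
    return tally
```

Comparing the two tallies makes the reluctance visible: a model that refuses to take positions shows a high neutral count on the bare prompts, which only drops once the user context states a preference.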
We’re happy to start with a good conversation about your challenge and explore the best solution together.
Join a one-day hackathon where we test and build on your idea. Not planning to use the outcome? Then you don’t pay – no questions asked.