@Web Do some research on https://somecompany.com and write up a detailed overview of what the company does. What might their database schema look like?
I need you to build a mock database for them in duckdb for a demo
Then:
Create a uv project and write a python script to add demo data. Use Faker.
@Web research how many customers they have. Make the database to appropriate scale.
Only takes a few minutes in Cursor, and it should work just as well in Claude Code. It works really well for the company's core business, but I still need to create one to populate 3rd-party sources (e.g. Stripe, Salesforce, HubSpot, etc.).
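For a sense of what the generated seed script ends up looking like, here's a minimal sketch; the schema, row count, and file name are invented for illustration, not from any real company:

```python
# Hypothetical seed script; the schema here is invented.
import duckdb
from faker import Faker

fake = Faker()
con = duckdb.connect("demo.duckdb")

con.execute("""
    CREATE TABLE IF NOT EXISTS customers (
        id INTEGER PRIMARY KEY,
        name VARCHAR,
        email VARCHAR,
        signed_up DATE
    )
""")

# Scale the range to whatever customer count the research step turned up.
rows = [
    (i, fake.name(), fake.email(), fake.date_between("-2y", "today"))
    for i in range(1, 1001)
]
con.executemany("INSERT INTO customers VALUES (?, ?, ?, ?)", rows)
print(con.execute("SELECT count(*) FROM customers").fetchone())
```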
matthewhefferon 21 hours ago [-]
Cool, I don’t do customer-specific demos, but I like this idea. I might add this use case as an option. Thanks for sharing!
matthewhefferon 22 hours ago [-]
I was tired of digging through Kaggle and writing prompts over and over just to get fake data for dashboards and demos. So I built a little tool to help me out.
It uses GPT-4o to generate a detailed schema and business rules based on a few dropdowns (like business type, schema structure, and row count). Then Faker fills in the rows using those rules, which keeps it fast and cheap.
You can preview the data, export as CSV or SQL, or spin up Metabase with one click to explore the data. It’s open-source, still in early stages, but wanted to share, get feedback and see how you'd improve it.
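If you're curious about the shape of that pipeline, here's a rough sketch (not the tool's actual code; the prompt, table, and provider names are made up):

```python
# Rough sketch of the LLM-plans / Faker-fills split (not the tool's real code).
import json
from faker import Faker
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
fake = Faker()

# 1) One LLM call plans the schema: column name -> Faker provider name.
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content":
        "Return a JSON object mapping column names to Faker provider "
        "names (e.g. name, email, date_this_year) for an orders table."}],
    response_format={"type": "json_object"},
)
schema = json.loads(resp.choices[0].message.content)

# 2) Faker fills the rows locally -- fast and cheap, no further LLM calls.
rows = [
    {col: getattr(fake, provider)() for col, provider in schema.items()}
    for _ in range(100)
]
```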
thenaturalist 20 hours ago [-]
Congrats, thanks for shipping and open sourcing this!
Cool to see Metabase is enabling contributions to the ecosystem this way! :)
matthewhefferon 18 hours ago [-]
No problem, thanks for taking a look!
paxys 20 hours ago [-]
Feature request - make the URL for the OpenAI API configurable. That way one can swap it out with Anthropic or any other LLM provider of their choice that provides an OpenAI-compatible API.
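For what it's worth, the OpenAI Python client already accepts a custom endpoint, so this could be as small as reading one environment variable (the variable name here is just a suggestion):

```python
# Point the stock OpenAI client at any OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    api_key=os.environ["OPENAI_API_KEY"],
)
```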
matthewhefferon 20 hours ago [-]
I was actually thinking about this very feature in the shower this morning :)
b0a04gl 21 hours ago [-]
Seen this pattern before too. Faker holds shape without flow.
Real tables come from actions: retry, decline, manual review, all that.
If you just set column types, you might miss why the row even happened. Generation needs to simulate behavior, not format.
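Something like this toy sketch is the difference: rows fall out of simulated actions instead of per-column fakes (states and odds invented):

```python
# Toy behavior simulation: each order retries until it settles or gives up,
# so row counts and shapes come from what "happened", not from a template.
import random
from faker import Faker

fake = Faker()
rows = []

for order_id in range(1, 101):
    customer = fake.name()
    attempt = 0
    while True:
        attempt += 1
        outcome = random.choices(
            ["settled", "declined", "manual_review"],
            weights=[0.7, 0.2, 0.1],
        )[0]
        rows.append({"order_id": order_id, "customer": customer,
                     "attempt": attempt, "outcome": outcome})
        if outcome == "settled" or attempt == 3:
            break  # retries are why some orders have multiple rows
```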
ajd555 20 hours ago [-]
Was looking for this exact comment. I completely agree with this method, especially if you're testing an entire flow and not just a UI tool. You want to test the service that interfaces between the API and the database.
I've been writing custom simulation agents (just simple Go programs) that simulate different users of my system. I can scale appropriately and see test data flow in. If Metabase could generate these simulation agents based on a schema and some instructions, now that would be quite neat!
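Roughly, each agent is just a loop that acts like a user; here's the idea translated to Python (the endpoint and payloads are placeholders, not a real API):

```python
# Minimal "user" agent: drives the real service so test data arrives through
# the same code path production data would. The endpoint is hypothetical.
import random
import time

import requests
from faker import Faker

fake = Faker()

def run_agent(base_url: str, n_actions: int = 50) -> None:
    for _ in range(n_actions):
        # Each iteration pretends to be a user doing something in the product.
        if random.random() < 0.5:
            requests.post(f"{base_url}/users",
                          json={"name": fake.name(), "email": fake.email()})
        else:
            requests.post(f"{base_url}/orders",
                          json={"sku": fake.uuid4(), "qty": random.randint(1, 5)})
        time.sleep(random.uniform(0.1, 1.0))  # human-ish pacing

run_agent("http://localhost:8080/api")
```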
Good job on this first version of the tool, though!
matthewhefferon 20 hours ago [-]
That’s a solid callout, appreciate you pointing it out. I’ll definitely dig into that more.
zikani_03 14 hours ago [-]
This is well put. I once built a tool called zefaker (github.com/creditdatamw/zefaker) to test some data pipelines, but never managed to find a good pattern or method for generating data that simulates actions or scenarios without too much extra work.
Was hoping this AI dataset generator solves that issue, but I guess it's still early days. Looks good though, and using Faker to generate the data locally sounds good as a cost-cutting measure; it also potentially opens room for human-in-the-loop adjustments of the generated data.
tomrod 20 hours ago [-]
The best synthetic data captures ingestion and action, not just relationships.
Relationships are important, but real data also captures a virtually infinite number of unexpected behaviors that you would preferably call errors or bugs.
MattSayar 19 hours ago [-]
I used Anthropic's new Claude API integration with artifacts to make a probably-worse version that you can play with (after logging in, of course): https://claude.ai/public/artifacts/eb7d8256-6d21-4c85-af9b-c...
I used this GitHub repo as context and Claude Opus 4 to create this artifact.
NitpickLawyer 8 hours ago [-]
Haha, I find this kind of exercise telling for what's coming to the one-size-fits-all SaaS companies out there. I see a future where small teams in-house the set of features they actually need, and a big drop in SaaS usage. It avoids the big vendor lock-in problems and unwanted features, and bypasses all the Accenture-style consulting fees.
MattSayar 25 minutes ago [-]
Optimistically, this will allow smaller teams to do more, hopefully incentivizing the consulting places to help out with harder problems.
ChrisMarshallNY 11 hours ago [-]
I wrote a Swift CLI app to generate dummy user profiles for an app we wrote (I needed many more than we’ll actually get, and I needed screenshots for the App Store that didn’t have real user data).
It was pretty “dumb,” and used thispersondoesnotexist.com for profile pics.
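For anyone who wants the same trick without Swift, it's a few lines of Python, assuming the site still serves a fresh JPEG per request at its root URL:

```python
# Each GET returns a new AI-generated face; save a handful as fake avatars.
import requests

for i in range(5):
    resp = requests.get("https://thispersondoesnotexist.com",
                        headers={"User-Agent": "profile-seeder"})
    with open(f"profile_{i}.jpg", "wb") as f:
        f.write(resp.content)
```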
klntsky 3 hours ago [-]
You absolutely do not need Docker as a requirement here.
jasonthorsness 20 hours ago [-]
AI is really good at this sort of thing; I've been using an LLM with Faker for some time to load data for demos into SingleStore: https://github.com/jasonthorsness/loadit
matthewhefferon 18 hours ago [-]
Nice, I like the challenge video!
jasonthorsness 15 hours ago [-]
Ha thanks, appreciate that, I regret the video a little as I was going through a short "a more exciting blog with videos is what the people want" phase.
reedlaw 16 hours ago [-]
"Dataset" connotes training data, but this seems to generate sample data, maybe for testing an application. Is there any use for synthetic datasets in ML?
dankwizard 10 hours ago [-]
words can have multiple meanings <:- )
smcleod 16 hours ago [-]
This is a bit confusing; I sort of expected it to be a bit like Kiln (https://github.com/Kiln-AI/Kiln) and generate datasets for AI, but it looks like the outputs are more just data/files than datasets?
wiradikusuma 20 hours ago [-]
"Stack: OpenAI API (GPT-4o for data generation)" -- I wonder if someday we'll have a generic API like how it's done in Java (e.g., Servlet API implemented by Tomcat, JBoss etc), so everyone can use their favorite LLM instead of having to register each provider like streaming services e.g. Disney+, Netflix, etc.
Depending on what you're using the synthetic data for, this is sometimes called distillation. Here is a robust example from some UPenn students: https://datadreamer.dev/
margotli 21 hours ago [-]
Feels like a useful tool for anyone learning analytics or just needing sample data to test with.