24 million tokens. That's what a five-year behavioral simulation of 76 audience segments costs in compute. In real money — using a cost-efficient model — that's about $15. BIG SHIFT!
curious as to the quality of the responses (output). If I have 10 or 50 or 1000 or ALL agents (just asking an LLM) aren't we just converging towards the mean of all possible answers/responses and outcomes? Please explain the value here.
The thing is, the number of segments is auto-generated by the system. It depends, as I understand it, on data quality and the projected longevity of the report needed for the future. The first experiment, described here, spawned 76 segments and burned 24M tokens; the second spawned 123 and burned 57M. I can't say the results are that different, but the fields of study are completely different.
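For a rough sense of scale: the post quotes $15 for 24M tokens, which implies a blended rate of about $0.625 per million tokens. A minimal sketch, assuming that flat rate holds (my assumption; real pricing splits input and output tokens):

```python
# Back-of-the-envelope cost check for the two runs mentioned above.
# Assumption: flat blended rate implied by "$15 for 24M tokens".
RATE_PER_MILLION = 15 / 24  # ~0.625 USD per million tokens

def run_cost(tokens_millions: float) -> float:
    """Estimated USD cost for a run of the given size (in millions of tokens)."""
    return tokens_millions * RATE_PER_MILLION

print(f"76 segments,  24M tokens: ~${run_cost(24):.2f}")
print(f"123 segments, 57M tokens: ~${run_cost(57):.2f}")
```

So the second, larger run would have cost roughly $35, still cheap for a multi-year simulation.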
Great read thanks for sharing!
Find time to try it. It is extremely good and fascinating.
just WOW as always!
Thanks! You should try this. But be careful with tokens ;)
Now I'm on combining EURISKO with LLM with all my free time ;)
You won't regret it.
Hope so...