Is the model degrading, or is this just a nasty “feature” so we spend more $$$ on it? (Same as Claude does: you can't even complete one feature, even on paid plans, without hitting the daily and weekly limits.)
Already in the cost-cutting phase? When will they raise the prices?
Almost every free-to-use model has this problem; it just gets worse and worse.
Granted, I only use LLMs for light research and translation most of the time, but ChatGPT is worse than a year ago, and Gemini is just… I could never get much use out of it because of the verbosity.
Whenever I needed some actual help, the only thing that was useful was DeepSeek; it doesn’t seem that muffled.
Isn’t this just the usual LLM weakness of “context window smaller than the context => must compress context => LLM becomes terrible”? Or does this happen on small new projects, too?
Now it’s the worst code I’ve ever seen. It even ignored my dependency injection and started creating StatefulWidgets that instantiate view models in initState and dispose of them in dispose (the framework I use does that automatically; my first post here was about how amazing it was when it actually learned what I did and understood it).
Well… guess I’ll need to code myself… this is sooooo 2024.
Yes, but is the context much larger now, perhaps? I’d test this by cloning the project as it was when you first tried Antigravity, opening the clone as a completely new project, and seeing how it goes. If it’s much worse than before, then Antigravity (or Gemini) really went downhill. If it goes as well as it did the first time, there’s just too much context (maybe even hidden context that Antigravity has tucked away somewhere).
Updated Documentation: Modified AGENTS.md to explicitly state that Date serializes to int and DateAndTime to standard ISO String, highlighting that no custom hooks are needed.
But no changes in AGENTS.md >.< (which is a symlink to .agents/rules/default.md)
Just complaining as a grumpy old dude… =P Guess I’m expecting too much from yet another lame LLM =\
The point is: at one point in time it was good; now it’s a piece of crap.
I’m not talking about how I misuse the tech. I’m talking about the huge decline in tool quality since I posted “Antigravity is something else” 26 days ago.
Without any changes on my side, the tool became significantly worse. That’s the whole point.
Did I say something about quota? I don’t remember (insert old-man meme here).
Antigravity only gave me one quota kick today (but the task was really huge), and switching the model to Claude “fixed it” (I’m on a paid plan, of course).
I think I complained about Claude, which is impossible to use because there are two quotas (one per 5-hour window, I think, plus a weekly one, and Opus burns through everything very fast); that’s why I stopped paying for it.
Ah - got it! I think I misunderstood from your original post. Looking forward to seeing what local models you would use and how well they would perform!
Although Ollama responds with JSON to initiate a tool call, no VS Code extension or CLI tool seems to understand it.
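For context, Ollama’s /api/chat endpoint signals a tool call as structured JSON inside the response message rather than as prose, and it’s the client (the extension or CLI) that has to parse and execute it. A minimal sketch of what that payload looks like and what a client would need to do with it; the `get_weather` tool and its handler are made up for illustration:

```python
import json

# Example of the kind of JSON an Ollama /api/chat response carries when the
# model decides to call a tool (field layout per Ollama's tool-calling API;
# "get_weather" itself is a hypothetical tool).
response = json.loads("""
{
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "function": {
          "name": "get_weather",
          "arguments": {"city": "Lisbon"}
        }
      }
    ]
  }
}
""")

def dispatch(call):
    """Route one tool call to a local handler -- the step the editor
    extension/CLI must perform instead of dumping the raw JSON."""
    handlers = {"get_weather": lambda args: f"sunny in {args['city']}"}
    fn = call["function"]
    return handlers[fn["name"]](fn["arguments"])

# The client, not the model, executes the tool and feeds the result back.
for call in response["message"].get("tool_calls", []):
    print(dispatch(call))  # prints: sunny in Lisbon
```

Note that, unlike some other APIs, `arguments` here is already a JSON object, not a string that needs a second parse.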
But today, when I asked DeepSeek and Mistral a simple question, I got the stupidest answer I have ever seen from an LLM in my life. Even stupider than ChatGPT (which is terrible at coding).
So, I gave up =\ Guess I’ll continue paying Google to use Antigravity.
Well, at least 25% of all of Google’s AI revenue goes to the Flutter team, right?