Antigravity became a pain

Is it just me, or is Antigravity getting worse every day? It used to be so smart; now it creates things like this (after 45 minutes):

Is the model degrading, or is this just a nasty “feature” so we spend more $$$ on it? (Same as Claude: you can’t even complete one feature, on paid plans, without hitting the daily and weekly limits.)

Antigravity was terrible yesterday; it spent three hours overlooking a simple error, so I had to find it the old-fashioned way myself.

Already in the cost-cutting phase? When will they raise the prices?

Almost every free-to-use model has this problem; it just gets worse and worse.

Granted, I only use LLMs for light research and translation most of the time. ChatGPT is worse than a year ago, and Gemini is just… I could never get much use out of it because of the verbosity.

Whenever I needed some actual help, the only thing that was useful was DeepSeek; it doesn’t seem as muzzled.

It seems that from time to time it gets stuck in a loop; if I don’t see any action within 10–15 minutes, I shut it down and restart.

This is a common issue with it.

They do have a feedback “thing,” but I have yet to see a response from the team on anything.

Isn’t this just the usual LLM weakness of “context window smaller than the context ⇒ must compress context ⇒ LLM becomes terrible”? Or does this happen on small new projects, too?


I’m on the same project.

In the first days, it seems like magic.

Now it’s the worst code I’ve ever seen. It even ignored my dependency injection and started creating a StatefulWidget that instantiates view models in initState and disposes of them in dispose (the framework I use does that automatically; my first post here was about how amazing it was when it actually learned what I did and understood it).

Well… guess I’ll need to code it myself… this is sooooo 2024.

Yes, but is the context much larger now, perhaps? I’d test this by cloning the project as it was when you first tried Antigravity, opening the clone as a completely new project, and seeing how it goes. If it’s much worse than before, then Antigravity (or Gemini) really went downhill. If it goes as well as it did the first time, there’s just too much context (maybe even hidden context that Antigravity has tucked away somewhere).

Do you have rules set up in Antigravity?

And if what?

I didn’t before; now I do. But they are pretty small.

Now I’m working in small chunks: asking for a small task, validating, asking it to fix, validating again, and then, if it still doesn’t work, fixing it by hand.

Before, it was smart enough to implement a complete feature very well (except for the UI; its UI capabilities are very bad, but the core is there).

And now it is lying…

  1. Updated Documentation: Modified AGENTS.md to explicitly state that Date serializes to int and DateAndTime to standard ISO String, highlighting that no custom hooks are needed.

But there were no changes in AGENTS.md >.< (which is a symlink to .agents/rules/default.md).
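As an aside, a layout like the one described (AGENTS.md symlinked to the rules file) can be reproduced like this; the file contents here are made up for illustration. One possible gotcha to check: if an agent “updates” the file by replacing it instead of writing through the link, the symlink gets clobbered and the real rules file never changes.

```shell
# Hypothetical setup; paths taken from the post above.
mkdir -p .agents/rules
echo "# project rules" > .agents/rules/default.md

# AGENTS.md as a symlink to the real rules file:
ln -sf .agents/rules/default.md AGENTS.md

# Verify the link resolves and both names show the same content:
readlink AGENTS.md        # -> .agents/rules/default.md
cat AGENTS.md             # -> # project rules
```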

Just complaining as a grumpy old dude… =P Guess I’m expecting too much from yet another lame LLM =\

As far as I understand, the MCPs are pointed at the docs.

If you asked your dev team to read the docs and implement a new feature, you might end up with more than one implementation.

Thus you give them a set of rules to guide them on how to implement features on your app.

Same goes for any LLM, Antigravity included.

The point is: at one point in time it was good; now it is a piece of crap.

I’m not talking about how I misuse the tech. I’m talking about the huge decline in tool quality since I posted “Antigravity is something else” 26 days ago.

Without any changes on my side, the tool became significantly worse. That’s the whole point.


Hi @evaluator118, I’m curious: did you try the new Gemini 3 Flash model? It shouldn’t run you into the quota issues as much.

I said something about quota? Don’t remember (insert old man meme here).

Antigravity only gave me one quota kick today (but the task was really huge), and changing the model to Claude “fixed it” (of course, I’m on a paid plan).

I think I complained about Claude, which is impossible to use because there are two quotas (one per 5-hour window, I think, plus a weekly one, and Opus consumes everything very fast); that’s why I stopped paying for it.

To be honest, I must stop being lazy and do something with the hardware I have here (a gaming PC I turn on only on weekends: 128 GB RAM, an RTX 4090 with 24 GB VRAM). Ollama is already installed and runs pretty fast, but I couldn’t get OpenCode to work with it =\ I’ll eventually try GitHub - A2G-Dev-Space/Local-CLI: AI Coding Assistant CLI for offline enterprise environments - Local LLM platform with Plan & Execute architecture, Supervised Mode, and auto-update system.

Ah - got it! I think I misunderstood from your original post. Looking forward to seeing what local models you would use and how well they would perform!

I tried Mistral, DeepSeek, and Qwen, with Continue.

Although Ollama responds with JSON to initiate tool usage, no VS Code extension or CLI tool seems to understand it.
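For what it’s worth, when a model does emit a tool call through Ollama’s /api/chat endpoint, the response carries it under message.tool_calls (with the arguments as a structured object), and the client is the one that has to parse and dispatch it. A minimal parsing sketch; the read_file tool and its arguments below are invented for illustration:

```python
import json

# Payload shaped like an Ollama /api/chat tool-calling response
# (message.tool_calls -> function.name + structured arguments).
# This example body is fabricated, not captured from a real run.
raw = json.dumps({
    "message": {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {"function": {"name": "read_file",
                          "arguments": {"path": "lib/main.dart"}}}
        ],
    }
})

def extract_tool_calls(body: str):
    """Return (name, arguments) pairs from a chat response, or [] if none."""
    msg = json.loads(body).get("message", {})
    return [(call["function"]["name"], call["function"]["arguments"])
            for call in msg.get("tool_calls", [])]

print(extract_tool_calls(raw))  # [('read_file', {'path': 'lib/main.dart'})]
```

If a client only looks at message.content (empty here), the tool call is silently dropped, which would match the symptom described above.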

But today, when I asked DeepSeek and Mistral a simple question, I got the stupidest answer I have ever seen from an LLM in my life. Even stupider than ChatGPT (which is terrible at coding).

So I gave up =\ Guess I’ll keep paying Google to use Antigravity.

Well, at least 25% of all Google AI revenue goes to the Flutter team, right? :roll_eyes:


most likely drone/robot warfare