Tips on AI tools for coding

But don’t you have to check there too what it is doing?

Oh, of course, but it’s doing something because I’m asking it to, not while I’m thinking of something else.


I’m pretty happy with the Roo Code plugin for VS Code, using Gemini 2.5 Flash as my LLM. All free, as long as I stay within Flash’s 10 requests per minute. Roo Code is a fork of Cline, and has modes that can be orchestrated with different permissions (like a read-only “ask” mode, or a “document” mode that can only write MD files). It also works well with MCPs (easy to install and use).


Good to know about that, @RandalSchwartz . Actually I’m using Cursor and it works 80% OK, IMHO. And it’s paid. I’m gonna take a look at Roo Code.

The problem with AI-generated code is: it can be good enough most of the time, then you lower your guard and boom… a piece of shitty code is there and you haven’t even noticed (yesterday, for instance, I asked for an animation effect and it put four Theme.of(context) calls inside the animation… four deep tree searches inside a block that runs 120 times per second).
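For what it’s worth, the usual fix for that pattern is to hoist the lookup out of the per-frame builder. A minimal sketch (a hypothetical widget, not the code from that session):

```dart
import 'package:flutter/material.dart';

// Sketch only: look the theme up once in build(), then reuse the value
// inside the builder that runs on every animation tick.
class PoppingCard extends StatelessWidget {
  const PoppingCard({super.key, required this.animation});

  final Animation<double> animation;

  @override
  Widget build(BuildContext context) {
    // One lookup per build, instead of one per frame inside the builder.
    final cardColor = Theme.of(context).colorScheme.surface;
    return AnimatedBuilder(
      animation: animation,
      // `child` is built once and passed through untouched on every tick.
      child: Card(color: cardColor, child: const SizedBox(height: 200)),
      builder: (context, child) => Transform.scale(
        scale: animation.value,
        child: child,
      ),
    );
  }
}
```

Passing the `child` to `AnimatedBuilder` also keeps the `Card` subtree from being rebuilt 120 times per second.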

Luckily, I don’t trust that shit at all, so I basically use it like that energetic trainee who is eager to do the right thing and knows how to search for A solution.

So, basically, I use Supermaven in VS Code, mainly because:

  1. It autocompletes a lot of things, especially if you give it context. For instance: if I open a CREATE TABLE .drift file in the right window and, in the left window, a .dart file where I’m writing a class to hold the values of that table, it shows completions for all fields in the table, with correct names and types, most of the time (but not always). So my work is to double-check what is written on the screen and accept with TAB. But I would never ask for it in a prompt: “hey, create a class that holds these values from this table”. The good part is that autocomplete seems to be free (while prompting is paid).

  2. I draw my screens by hand, every single time, so I know they are optimized (or not, but at least it’s my fault). Sometimes I have an idea but I’m too lazy to implement it (for instance: I made a login screen that shows a card at the bottom and a background at the top, and I had the idea to make this card pop up in an animation when the app opens). I then asked in the prompt how it would do it, and it generated the code for me. Hitting apply in Supermaven always leads to corrupted code (it doesn’t know how to diff and apply in VS Code), and, as I mentioned in the first paragraph, it almost always generates shitty code, but the how-to is there. I just copy pieces of the code by hand, fix them in my style and learn how it works. This works very well for me; it’s like working with a dude that knows some things but isn’t very good at them. At least he is smart enough to point me in the right direction or, at least, give me some ideas on how to do something.
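To make point 1 concrete, the kind of pairing described might look like this (a hypothetical `todos` table; names and types are illustrative, not from the poster’s project):

```dart
// Hypothetical table in the .drift file open on the right:
//
//   CREATE TABLE todos (
//     id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
//     title TEXT NOT NULL,
//     done BOOLEAN NOT NULL DEFAULT FALSE
//   );
//
// The hand-written value class on the left; the autocomplete's job is to
// propose fields with matching names and Dart types.
class Todo {
  const Todo({required this.id, required this.title, required this.done});

  final int id;
  final String title;
  final bool done;

  factory Todo.fromRow(Map<String, Object?> row) => Todo(
        id: row['id'] as int,
        title: row['title'] as String,
        done: (row['done'] as int) != 0, // SQLite stores booleans as 0/1
      );
}

void main() {
  final todo = Todo.fromRow({'id': 1, 'title': 'review diff', 'done': 0});
  print('${todo.id} ${todo.title} ${todo.done}'); // 1 review diff false
}
```

The double-check-and-TAB workflow is exactly about verifying that each suggested field matches the column’s name and type before accepting it.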

But using something to apply code blindly, never.


Just a side note: .of() doesn’t do a deep search. Inherited widget references are copied down to child elements, so the lookup isn’t really that bad.
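For reference, the same cheap lookup applies to any inherited widget: `dependOnInheritedWidgetOfExactType` reads a map kept on the element rather than walking ancestors one by one. A minimal sketch of the pattern (a hypothetical `Counter` widget):

```dart
import 'package:flutter/widgets.dart';

// Minimal sketch of the .of() pattern. The dependOnInheritedWidgetOfExactType
// call below is a map lookup on the element, not a walk up the widget tree.
class Counter extends InheritedWidget {
  const Counter({super.key, required this.value, required super.child});

  final int value;

  static int of(BuildContext context) =>
      context.dependOnInheritedWidgetOfExactType<Counter>()!.value;

  @override
  bool updateShouldNotify(Counter oldWidget) => value != oldWidget.value;
}
```

`Theme.of(context)` resolves the same way, which is why the cost is the repeated call per frame, not a tree traversal.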


Or use https://studio.firebase.google.com with its built-in Gemini.
Which you can also get to with just firebase.studio or idx.dev.

Idx is now Firebase Studio.

IDX is no longer.


I use some rules when programming, so I don’t need to know exactly what every single black-box function does (e.g.: is it a function? Cache it). :wink: This way I’m 100% sure I don’t need to worry about whether the function call is expensive or not. That kind of thinking is lacking in AI in general because, well, they are not “I” at all. They are just glorified parrots.
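As a plain-Dart sketch of that rule (a hypothetical memoizing wrapper, not any specific package):

```dart
// "Is it a function? Cache it": wrap a single-argument function so each
// distinct input is computed exactly once.
T Function(A) memoize<A, T>(T Function(A) fn) {
  final cache = <A, T>{};
  return (A arg) => cache.putIfAbsent(arg, () => fn(arg));
}

int calls = 0;

int slowSquare(int n) {
  calls++; // counts how often the real (possibly expensive) work runs
  return n * n;
}

void main() {
  final square = memoize(slowSquare);
  print(square(9)); // 81, computed
  print(square(9)); // 81, served from the cache
  print(calls);     // 1 — the underlying function ran once
}
```

With the wrapper in place, it no longer matters how expensive `slowSquare` is; repeated calls with the same argument are map lookups.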

I use autocomplete, because for some reason I can’t think and write well at the same time, I can only focus on one thing at a time, also have difficulties with writing code accurately when I’m thinking.

I’m still using the Codeium (Windsurf) extension for autocomplete. It hasn’t changed much since release; most of the time it doesn’t suggest more than one line, which is helpful and not overwhelming. The suggestions are quite good and fast, but not perfect, mostly when it needs to use new code from other files.

So when something prints out a whole line accurately 90% of the time, it’s a big help, I have no issues with quickly identifying if the suggestion is correct and cycling through suggestions. Sometimes I just partially accept the line and write the rest manually. It allows me to do the menial stuff fast.

What does Cody do that’s enhancing your workflow? Does it help you when you need an answer for something that you don’t know?

This is still autocomplete, it just prints many lines, which is harder to check than line by line, at least for me. Windsurf does that too if you write a comment describing what the code should do. But it’s less accurate than doing it line by line.

I didn’t have much luck getting good solutions from prompts whenever I couldn’t find answers with Google. The whole prompting workflow seems cumbersome to me; I’m at the point where, when I can describe the problem accurately, I already know the solution.

Maybe it’s different with other languages, but the LLMs I tried are not great at understanding the Dart ecosystem in detail. For example, when I started using go_router, I needed help understanding it; I tried asking ChatGPT and Gemini, but they were very unhelpful and just made up functionality that never existed. They tend to hallucinate if it’s not a popular language or package.

How do you guys use prompts? I really don’t understand how to be productive with them without wasting time going down useless rabbit holes. They only seem helpful to me for basic stuff, when I use them with languages I don’t know much about.

Is there an extension that’s better aware of existing code in other files, when suggesting completions?

Using Supermaven: usually I write something (for instance, a Scaffold with a Column that has a Spacer() and then a Card), something like this:
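(The original snippet didn’t survive the copy; a minimal reconstruction of what’s described, with illustrative sizes and the red card mentioned in the prompt:)

```dart
import 'package:flutter/material.dart';

// Reconstruction of the described starting point: a Column whose Spacer
// pushes a Card down to the bottom of the screen.
class LoginPage extends StatelessWidget {
  const LoginPage({super.key});

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Column(
        children: const [
          Spacer(), // takes all the space above the card
          Card(
            color: Colors.red,
            child: SizedBox(width: double.infinity, height: 240),
          ),
        ],
      ),
    );
  }
}
```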

Then, I add the current file to the Supermaven prompt and ask something like “I would love to make this red card pop up when I open this page”.

It will generate some code, usually correct for those easy scenarios. All I need to do is read it and apply it (by hand, because its apply is bugged as hell). While I’m copying the solution by hand, I’m learning* and fixing some issues I find along the way.

If it misses something, I can argue with it and we do kind of a pair programming session. But this has its limits: if it didn’t understand you in the first 3 attempts, forget it… it will start to hallucinate or go back to the first solution (which was wrong anyway).

  *lies… I’m too lazy to learn things and do the job myself while paying for Supermaven.

That’s my general experience too. They’re good at throwing ideas at the wall, but they spiral out fast. Granted, I’ve only really tried ChatGPT, Gemini and DeepSeek. I find DeepSeek is better at problem solving, but I don’t use them often.

You’re absolutely right. In many cases, it does make more sense to just write things myself.

But:

a) Sometimes, I remember to use “chop” (the buzzwordy acronym for AI chat-oriented programming) and I’m amazed at how quickly I get something done. Code that would probably take me hours of back-and-forth can be done in 20 minutes.

b) Other times, the mere option to be able to chat with a semi-intelligent being about the code is the kick I need to get things going. A sort of rubber duck programming, except the rubber duck can sometimes give you good ideas or insight.

You’re absolutely right that the experience is cumbersome. Some of it is just “new product syndrome”. Things haven’t been worked out, and companies would rather launch something before someone else does. Some of it is inherent to LLMs now, and possibly in the future. LLMs are not smart in the way a person is smart. Their intelligence is alien to us. This obviously brings friction, and maybe this will never change.

Some of the cumbersomeness comes from the fact that we’re “forced” from an individual contributor role into a tech lead role. Suddenly we need not only to write programs that do what we want, we also need to be able to explain what we want. These are two different skills.

I, for one, am trying my best to give “chop” a chance despite its cumbersomeness and sometimes downright frustrating “features” (such as the LLM “lying” to me about something just so that it can be consistent, or making me read code that turns out to be total BS).

I find that AI badly needs context on every prompt. If your tool is not aware of the context, you must provide it. Most tools get context from previous prompts. Often there are assumptions that you make about context that you need to include. You can say “using this SQL table definition, generate the code to access it.” Or you can say “in flutter and dart I am writing object-oriented code with functional programming. Generate a DTO based on this SQL table definition, with a proxy class to provide database calls and Data Transfer.” Context seems to help.
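To make that second, context-rich prompt concrete, the shape of output you’d be asking for might be something like this (a hypothetical `users` table; the proxy takes an injected query function, so nothing here depends on a specific database package):

```dart
// Context given to the model:
//   CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL);

// The DTO: a plain value object mirroring the table.
class UserDto {
  const UserDto({required this.id, required this.name});

  final int id;
  final String name;

  factory UserDto.fromMap(Map<String, Object?> map) =>
      UserDto(id: map['id'] as int, name: map['name'] as String);

  Map<String, Object?> toMap() => {'id': id, 'name': name};
}

// The proxy: hides the SQL behind typed calls. Because the query function
// is injected, it can be exercised without a real database.
class UserProxy {
  UserProxy(this._query);

  final Future<List<Map<String, Object?>>> Function(String sql) _query;

  Future<List<UserDto>> fetchAll() async =>
      (await _query('SELECT id, name FROM users'))
          .map(UserDto.fromMap)
          .toList();
}

Future<void> main() async {
  // Stand-in query function returning one canned row.
  final proxy = UserProxy((sql) async => [{'id': 1, 'name': 'ada'}]);
  final users = await proxy.fetchAll();
  print('${users.length} ${users.first.name}'); // 1 ada
}
```

The point of the verbose prompt is exactly this: naming the patterns (DTO, proxy) steers the model toward a structure you already intend to use.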

That’s the beauty of Supermaven. I can easily share my secret source code with it (and be sure it won’t be used anywhere else), and I can even say “I need help to do X on line 45”.

I’ve been using Codeium too and I agree it’s really helpful for autocomplete, especially when writing similar lines of code in Flutter or Dart. In software development, having tools like this can really speed things up and help you focus more on logic than typing. I’ve also tried Tabnine and found it useful for quick suggestions, although the free version has some limits. I’m definitely interested in hearing what other tools people are using, whether for improving code quality or just making the process more efficient.

My stack is now the Roo Code plugin for VS Code, using Gemini 2.5 (both Flash and Pro) models on a free tier. When I add MCPs like git and GitHub and midnight commander, that combination is fully agentic and very configurable. And when I get frustrated by rate limits, I pull out my paid configuration (Roo is able to hold both at once) and blaze ahead.

I use Gemini Code Assist (the Google plugin); it is supported on most IDEs. What IDE do you use?

What benefit does Roo bring?

GCA does not yet support MCPs, and I now have MCPs for a dozen things, including my pub.dev search tool. That may change in the future, but I think Roo’s ability to switch (customizable!) modes based on “thinking” still gives it an advantage.