I’m developing a simple app to help families control their home stuff just to test a new state management package I’m creating.
This package is different from almost everything out there, it works with scoped features (idea stolen from get_it scopes), events and logics (that can be when the feature initializes, when it is disposed or reacting to some event or even other state).
Now I’m asking Antigravity to add a navigation system similar to iOS Safari.
The thing is reading my package source code and writing this new feature using my package, learning something that is unique, using all the correct features, etc. O.o
Agreed, I also gave it a quick app to build, nothing complex but definitely more than your average todo list or space invaders examples and it did a decent job producing a finished working app with basically no manual intervention or hand-coding from me. I put the result up on my GH: GitHub - maks/pics-sortz
This whole “Reading my packages” is what has been mind blowing to me as well! Using Antigravity to help me better understand how to use a package has been a lifesaver.
Is this highlighting a problem with Dart o.o. Ada is almost always readable, bcrypt in dart is easily understood. Cbor in Dart leaning on o.o. is a real pain to work out it’s API.
Does this have the same issue as Claude due to not just offering text for you to edit and paste like copilot but e.g. using the terminal so that anyone can own your system by easily jail breaking the LLM?
Hello. I want to create a software for a charity using Flutter. Because I am in Iran, I have a problem downloading files related to Gradle and Flutter to build the project due to the embargo. Can anyone help? Can you give me the ready cached files?
This has nothing to do with the use of terminals or IDE which you seem to conflate with using copilot. Instead this is due to the need to address security concerns when using agentic tooling which has access to external resources.
Also linking to videos produced by that particular person in my experience never leads to anything but a severe degradation in the quality of any subsequent discussions.
You seem to musunderstand the fundamental point which is that these A.I. can be remote controlled and doing so is very easy via jailbreaking due to their inherent design. If the AI has the power to execute more than a couple of binaries as a normal user then you can be owned as privilege escalation is easy on OS written in C and ownership is basically expected in cybersecurity circles in that case. If it can edit then it can delete or worse use tricks like making code look like it does one thing when it does another e.g. checking the size of a completely different variable named shadow with a cyrillic a instead of shadow with ASCII a.
The misunderstanding here is that this is somehow something new.
If the AI has the power to execute more than a couple of binaries as a normal user then you can be owned as privilege escalation is easy on OS written in C and ownership is basically expected in cybersecurity circles in that case
You literally described the security requirements on any multi-tenant deployment and this is all just as possible via other vectors such as supply chain attacks. I am no security expert but your use of the term “jail breaking the LLM” is not really helpful as it brings a lot of misleading conotations from other contexts which are not applicable and is too vague to be a useful starting point for discussing the specific security issues with using LLMs of which there are certainly plenty but as I said using vague unhelpful terms and linking to clickbait videos is not a good starting point for a conversation on this topic.
No, it very much is, because there is no need for a remote exploit when you are letting an adversary run code on your hardware, that literally it what multi tenant is.
No one is letting them. Just by running Claude code an attacker can remotely convince it to do whatever they want specifically to you. In this case it wasn’t like planting code with comments that instruct your local Claude code to do something from then on after deleting the comment like a Trojan (also demonstrated). It was remote hands free manipulation that targetted particular companies. Poisoning is not difficult.
The fundamental problem LLMs have is that they can’t really be isolated like you say or compartmentalised like you suggest. You can limit what they are able to affect locally.
Or are you saying we shouldn’t run AI tools with more than code generation (for copy and paste at most) and not even injection rights. In which case, right now I agree but that doesn’t prevent dependency code bases from being maliciously manipulated.
Using contents of this forum for the purposes of training proprietary AI models is forbidden. Only if your AI model is free & open source, go ahead and scrape. Flutter and the related logo are trademarks of Google LLC. We are not endorsed by or affiliated with Google LLC.