“People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before,” said Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology’s Computer Science & Artificial Intelligence Laboratory, in an interview with the Wall Street Journal. “I think there is a risk of accumulating lots of very shoddy code written by a machine,” he said, adding that companies will have to rethink methodologies around how they can work in tandem with the new tools’ capabilities to avoid that.

We recently had a conversation with some folks from Google who helped to build and test the new AI models powering code suggestions in tools like Bard. Paige Bailey is the PM in charge of generative models at Google, working across the newly combined unit that brought together DeepMind and Google Brain. “Think of code produced by an AI as something made by an SWE helper that's at your bidding,” says Bailey, “and that you should really rigorously look over.”

Still, Bailey believes that some of the work of checking the code over for accuracy, security, and speed will eventually fall to AI as well. “Over time, I do have the expectation that large language models will start kind of recursively applying themselves to the code outputs. So there's already been research done from Google Brain showing that you can kind of recursively apply LLMs such that if there's generated code, you say, ‘Hey, make sure that there aren't any bugs. Make sure that it's performant, make sure that it's fast, and then give me that code,’ and then that's what's finally displayed to the user. Google is already using this technology to help speed up the process of resolving code review comments. So hopefully this will improve over time.”

What are people building and experimenting with today?
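The recursive pass Bailey describes, where generated code is fed back through the model with review instructions before anything reaches the user, can be sketched in a few lines. This is a minimal illustration, not Google's implementation: `model` stands in for any LLM call, and the prompt wording and `rounds` count are assumptions.

```python
def refine(model, task, rounds=2):
    """Recursively apply an LLM to its own code output.

    `model` is any callable mapping a prompt string to a text
    response (a real LLM client would go here); `task` is the
    user's original request. Only the final pass is displayed.
    """
    draft = model(task)
    for _ in range(rounds):
        # Feed the previous draft back with review instructions.
        draft = model(
            f"Here is code for this task: {task}\n\n{draft}\n\n"
            "Make sure that there aren't any bugs, make sure that "
            "it's performant, make sure that it's fast, and then "
            "give me that code."
        )
    return draft  # this is what's finally displayed to the user
```

The user never sees the intermediate drafts; each round only has to improve on the last, which is what makes the recursive framing attractive.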
Several have written in the past on the idea of self-healing code. Head over to Stack Overflow’s CI/CD Collective and you’ll find numerous examples of technologists putting these ideas into practice.

When code fails, it often gives an error message. If your software is any good, that error message will say exactly what was wrong and point you in the direction of a fix. Previous self-healing code programs are clever automations that reduce errors, allow for graceful fallbacks, and manage alerts. Maybe you want to add a little disk space or delete some files when you get a warning that utilization is at 90%. Or hey, have you tried turning it off and then back on again?

Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to the creation, maintenance, and improvement of code at an entirely new level. The ability of LLMs to quickly produce large chunks of code may mean that developers (and even non-developers) will be adding more to the company codebase than in the past.

“One of the things that I'm hearing a lot from software engineers is they're saying, ‘Well, I mean, anybody can generate some code now with some of these tools, but we're concerned about maybe the quality of what's being generated,’” says Forrest Brazeal, head of developer media at Google Cloud. The pace and volume at which these systems can output code can feel overwhelming. “I mean, think about reviewing a 7,000-line pull request that somebody on your team wrote. It's very, very difficult to do that and have meaningful feedback. It's not getting any easier when AI generates this huge amount of code. So we're rapidly entering a world where we're going to have to come up with software engineering best practices to make sure that we're using GenAI effectively.”
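That pre-LLM style of self-healing, reacting to a disk-utilization alert by clearing expendable files, can be sketched as below. The paths, the 90% threshold, and the choice to delete oldest scratch files first are all illustrative assumptions, not a production recipe.

```python
import os
import shutil


def heal_disk(mount="/", scratch="/tmp/app-cache", threshold=0.90):
    """Self-heal a disk-utilization alert: if usage on `mount`
    crosses `threshold`, delete files in the `scratch` directory,
    oldest first, until usage drops back under the threshold.
    Returns the list of paths that were removed."""
    usage = shutil.disk_usage(mount)
    if usage.used / usage.total < threshold:
        return []  # nothing to heal
    removed = []
    # Expendable scratch files, oldest first.
    candidates = sorted(
        (os.path.join(scratch, name) for name in os.listdir(scratch)),
        key=os.path.getmtime,
    )
    for path in candidates:
        os.remove(path)
        removed.append(path)
        usage = shutil.disk_usage(mount)
        if usage.used / usage.total < threshold:
            break  # healed: back under the threshold
    return removed
```

Hooked up to a monitoring alert (or run from cron), a script like this handles the "utilization is at 90%" warning without waking anyone up; the generative-AI version of self-healing aims at the same reflex, but for the code itself.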
One of the more fascinating aspects of large language models is their ability to improve their output through self-reflection. Feed the model its own response back, then ask it to improve the response or identify errors, and it has a much better chance of producing something factually accurate or pleasing to its users. Ask it to solve a problem by showing its work, step by step, and these systems are more accurate than those tuned just to find the correct final answer. While the field is still developing fast, and factual errors, known as hallucinations, remain a problem for many LLM-powered chatbots, a growing body of research indicates that a more guided, auto-regressive approach can lead to better outcomes.

This gets really interesting when applied to the world of software development and CI/CD. Most developers are already familiar with processes that help automate the creation of code, detection of bugs, testing of solutions, and documentation of ideas.
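The feed-it-back pattern above, where the model shows its work, critiques its own answer, and then revises, can be sketched as a three-prompt round trip. As before, `model` is a stand-in for any LLM call and the prompt wording is an assumption:

```python
def reflect(model, question):
    """One round of self-reflection: answer, critique, revise.

    `model` is any callable mapping a prompt string to a text
    response (a real LLM client would go here).
    """
    # 1. Ask the model to show its work, step by step.
    draft = model(f"Solve this step by step, showing your work: {question}")
    # 2. Feed the response back and ask it to identify errors.
    critique = model(f"Identify any errors in this solution:\n{draft}")
    # 3. Ask for an improved answer informed by its own critique.
    return model(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\nGive an improved final answer."
    )
```

Nothing here is specific to one vendor's API; the gain comes purely from the guided, multi-pass prompting the research describes.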