llms Choose Boring Technology and LLMs By www.maragu.dev Published On :: Fri, 08 Nov 2024 12:20:41 +0000 Comments Full Article
llms Github - open-webui/open-webui: User-friendly WebUI for LLMs (Formerly Ollama WebUI) By github.com Published On :: 2024-11-14T05:47:01+00:00 Full Article
llms Researchers detail RoboPAIR, an algorithm that is designed to induce robots, relying on LLMs for their inputs, to ignore models' safeguards without exception By biztoc.com Published On :: Thu, 14 Nov 2024 07:02:23 GMT AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By… Full Article
llms Can LLMs Provide References for Their Work? An Experiment By www.windley.com Published On :: Fri, 31 May 2024 08:32:29 -0400 Summary: We know that LLMs hallucinate. And we know that we can overcome some of that problem with better prompts. But can LLMs provide references for their work? I conducted an experiment. Recently my friend Scott Mace asked this on X in response to a post about the evolution of software and how LLMs will likely follow that same trajectory: Yes, but how trustworthy will it be? Can I at least see the links it used to create its response? From XReferenced 2024-05-21T18:09:06-0400 The short answer is "no", but the real answer is a bit more nuanced because of how LLMs like ChatGPT work. First the short answer: LLMs don't understand what they're saying so they can't reference the material that it used to create a response. They don't work like that. They don't go out, do a bunch of research and then create a response. Instead lots (and I mean LOTS) of material is fed into the program that creates the model, a multi-dimensional matrix of word orders. Then when you ask a question ("give it prompt" in LLM-speak), it processes the question and generates text based on the probabilities of words in its model1. There's no way it can satisfy Scott directly because the model doesn't—can't, as far as I know—include the references that it used to create the text because it's generating it on the fly from material that has been thoroughly blended together. But, in fairness, that's not what people do most of the time either. If you asked me over lunch or at the end of a tech presentation to describe zero trust and its pros and cons, I'd start talking. I wouldn't start looking up things on the web and somehow weave them into text complete with references. I'd go look for references if you asked me to back up something I said. So, let's try that with ChatGPT 4o. I gave it the following prompt: You are a writer for a website focusing on technology. Your specialty is security. Write a 150 word explanation of zero trust with three pros and three cons. ChaptGPT answered with this text: Zero Trust is a security model based on the principle of "never trust, always verify." Unlike traditional security models that assume everything inside the network is safe, Zero Trust assumes that threats can exist both inside and outside the network. It requires strict identity verification for every user and device attempting to access resources, regardless of their location. Pros: Enhanced Security: By continuously verifying users and devices, Zero Trust reduces the risk of breaches and unauthorized access. Minimized Attack Surface: It limits access to only what is necessary, reducing potential entry points for attackers. Improved Compliance: Helps meet regulatory requirements by ensuring strict access controls and logging all access attempts. Cons: Complex Implementation: Deploying Zero Trust can be complex and time-consuming, requiring a significant overhaul of existing systems and processes. Increased Costs: The initial investment in technology and training can be high. User Experience Impact: Continuous verification can lead to friction and inconvenience for users if not implemented smoothly. Zero Trust is essential for modern cybersecurity, but it requires careful planning and resources to implement effectively. That's pretty good. But what if we need some references to other material to back up the pros and cons? I asked ChatGPT to supply them: Now repeat that task, but this time include references as hyperlinks to back up each pro and con. ChaptGPT answered with this text: Zero Trust is a security model based on the principle of "never trust, always verify." Unlike traditional security models that assume everything inside the network is safe, Zero Trust assumes that threats can exist both inside and outside the network. It requires strict identity verification for every user and device attempting to access resources, regardless of their location. Pros: Enhanced Security: By continuously verifying users and devices, Zero Trust reduces the risk of breaches and unauthorized access (CSO Online). Minimized Attack Surface: It limits access to only what is necessary, reducing potential entry points for attackers (NIST). Improved Compliance: Helps meet regulatory requirements by ensuring strict access controls and logging all access attempts (Gartner). Cons: Complex Implementation: Deploying Zero Trust can be complex and time-consuming, requiring a significant overhaul of existing systems and processes (Forrester). Increased Costs: The initial investment in technology and training can be high (ZDNet). User Experience Impact: Continuous verification can lead to friction and inconvenience for users if not implemented smoothly (TechRepublic). Zero Trust is essential for modern cybersecurity, but it requires careful planning and resources to implement effectively. That looks great! Nice references for each bullet. There's just one problem. All the links, except the NIST reference, are dead. I don't know if they're wrong, made up, or stale. I tried finding content on a few of the sites by searching and came up dry. As I heard Johannes Ernst say once, ChatGPT is a "large language model" not a "large fact model" and this bears that out. Still, asking for references or evidence for statements is something people are going to want. So, I tried a different tack. This time I asked this: Why do you say complex implementation is a problem for zero trust? Can you give me a few references on that I can read myself? ChatGPT responded with this: Implementing a Zero Trust security model can be challenging for several reasons: Overall Complexity: Zero Trust requires a comprehensive understanding of your network's intricacies, including users, devices, applications, and data. This can involve significant changes to existing infrastructure, which can be disruptive and costly. Organizations need to be prepared for the extensive planning and resources required for successful implementation (Help Net Security) (Enterprise Networking Planet). Legacy Systems: Many organizations have legacy systems that are not designed to operate within a Zero Trust framework. These systems might require different security measures or even complete replacement, adding to the complexity and cost (Help Net Security) (Enterprise Networking Planet) Cultural Shift: Zero Trust is not just a technological shift but also a cultural one. It requires continuous verification and strict access controls, which can lead to user frustration and resistance if not managed properly. Ensuring user buy-in and minimizing disruption to workflows through training and communication is essential (Deloitte United States) (Guidehouse). Resource Intensive: Implementing and maintaining Zero Trust can put a strain on IT resources due to the need for continuous monitoring and management. The need for advanced security tools and ongoing updates further adds to the resource demands (Help Net Security) (Enterprise Networking Planet). Stakeholder Alignment: Achieving buy-in from all stakeholders and aligning them with the Zero Trust principles is critical. This requires a coordinated effort to educate stakeholders, prioritize gaps, and ensure consistent understanding and application of Zero Trust principles across the organization (Deloitte United States) (Guidehouse). These challenges highlight the importance of a phased and well-planned approach to Zero Trust implementation, considering both technological and human factors. For more detailed insights, you can refer to articles from Help Net Security and Guidehouse. Much better. The links in the bullets all work, but the ones at the end don't. The articles it references aren’t very detailed, so I wouldn’t say they’re stellar references, but at least there’s a pointer to outside material that discusses similar challenges. So, partial success. I've found that when you're using ChatGPT to do research, treating it like a librarian and asking questions that follow up are the best way to gain confidence that the material is correct and find the supporting material, just as I did above. I'm sure you could wrap additional helper functions around an LLM to make getting references easier. But for now interrogation through chat is the best way to get comfortable with the results. Notes Yeah, this is really simplified and thus wrong, but it's directionally correct. For a better explanation, see What Is ChatGPT Doing … and Why Does It Work?. Photo Credit: Older Man Talking to Robot Librarian from DALL-E (public domain) Prompt: draw me a rectangular picture of an older man talking with a robot librarian. Use earth tones. Tags: genai llm chatgpt prompt+engineering Full Article
llms Motiva Unveils Motiva Generator™ Private Label for Enterprises Seeking Tailored, Private LLMs By www.24-7pressrelease.com Published On :: Wed, 03 Jan 2024 08:00:00 GMT New product addresses Large Language Model data security and privacy, offering per-customer solutions - a first in MarTech. Full Article
llms Re: LLMs and refusals By seclists.org Published On :: Thu, 25 Jul 2024 20:37:35 GMT Posted by Jason Ross via Dailydave on Jul 25It's likely this is going to happen anyway, the new Mistral just dropped and seems to perform roughly on par with llama3 and gpt4o, so the next wave of fine tuned versions like dolphin are almost certainly coming soon. OpenAI also has announced free fine tuning of gpt4o mini until late September (up to 2m tokens/day) so it may be possible to fine tune around some of its guardrails for a reasonable cost. Full Article
llms Re: LLMs and refusals By seclists.org Published On :: Sun, 28 Jul 2024 21:20:00 GMT Posted by David Manouchehri via Dailydave on Jul 28Breaking down a prompt into multiple steps works pretty well for us. e.g. first we get generic mean reasons: [image: image.png] Then I just shove the mean reasons into the system message (you can do this with another LLM call instead in real life, I just cheated by copy pasting since there's already too many screenshots in this email): [image: image.png] This is with gpt-4o-2024-05-13 above, but you can see below it works with Llama 3.1... Full Article
llms sboms and LLMs By seclists.org Published On :: Wed, 11 Sep 2024 17:52:39 GMT Posted by Dave Aitel via Dailydave on Sep 11People doing software security often use LLMs more as orchestrators than anything else. But there's so many more complicated ways to use them in our space coming down the pipe. Obviously the next evolution of SBOMs <https://www.cisa.gov/resources-tools/resources/cisa-sbom-rama> is that they represent not just what is contained in the code as some static tree of library dependencies, but also what that code does in a summary fashion... Full Article
llms Re: sboms and LLMs By seclists.org Published On :: Thu, 12 Sep 2024 10:18:40 GMT Posted by Isaac Dawson via Dailydave on Sep 12Well this is rather timely! Although I'm not sure using an LLM for the behavioral aspect is entirely necessary. I've been working on an experimental system that does just what you talk about for dependencies ( https://docs.gitlab.com/ee/user/application_security/dependency_scanning/experiment_libbehave_dependency.html, pre-alpha!). My solution uses static analysis because I'm a fan of determinism. Snark aside, looking at behaviors... Full Article
llms Re: sboms and LLMs By seclists.org Published On :: Thu, 12 Sep 2024 20:19:48 GMT Posted by Adrian Sanabria via Dailydave on Sep 12We've been talking about and giving "Beyond the SBOM" presentations for a while now, but to your point, I don't see anyone actually doing it. If Solarwinds said "here's a script that will lock down your host firewall to just the outbound access our tools need to update themselves", that would be amazing, and would have saved everyone some time and trouble a few years ago. [image: image.png] And Biden's EO... Full Article
llms Hacking the Edges of Knowledge: LLMs, Vulnerabilities, and the Quest for Understanding By seclists.org Published On :: Sat, 02 Nov 2024 18:10:19 GMT Posted by Dave Aitel via Dailydave on Nov 02[image: image.png] It's impossible not to notice that we live in an age of technological wonders, stretching back to the primitive hominids who dared to ask "Why?" but also continually accelerating and pulling everything apart while it does, in the exact same manner as the Universe at large. It is why all the hackers you know are invested so heavily in Deep Learning right now, as if someone got on a megaphone at Chaos... Full Article
llms SE Radio 582: Leo Porter and Daniel Zingaro on Learning to Program with LLMs By se-radio.net Published On :: Wed, 20 Sep 2023 18:05:00 +0000 Dr. Daniel Zingaro and Dr. Leo Porter, co-authors of the book Learn AI-Assisted Python Programming, speak with host Jeremy Jung about teaching programming with the aid of large language models (LLMs). They discuss writing a book to use in Leo's introductory CS class and explore how GitHub Copilot de-emphasizes syntax errors, reduces the need to memorize APIs, and why they want students to write manual test cases. They also discuss possible ethical concerns of relying on commercial tools, their impact on coursework, and why they aren't worried about students cheating with LLMs. Full Article
llms CONFIRMED: LLMs have indeed reached a point of diminishing returns By garymarcus.substack.com Published On :: 2024-11-13T05:47:01+00:00 CONFIRMED: LLMs have indeed reached a point of diminishing returns https://ift.tt/e4hKjQ7 ai, llms, trends, investment Full Article
llms microsoft/BitNet: Official inference framework for 1-bit LLMs By github.com Published On :: 2024-11-13T08:20:27+00:00 Full Article
llms Everything I've learned so far about running local LLMs By nullprogram.com Published On :: 2024-11-13T08:32:40+00:00 Over the past month I’ve been exploring the rapidly evolving world of Large Language Models (LLM). It’s now accessible enough to run a LLM on a Raspberry Pi smarter than the original ChatGPT (November 2022). A modest desktop or laptop supports even smarter AI. Full Article
llms OSI Open AI Definition Stops Short of Requiring Open Data for LLMs By www.hpcwire.com Published On :: Wed, 06 Nov 2024 21:00:10 +0000 The movement toward open source AI made progress today when the Open Source Initiative released the first Open Source AI Definition (OSAID). While the OSAID provides one step forward, the […] The post OSI Open AI Definition Stops Short of Requiring Open Data for LLMs appeared first on HPCwire. Full Article Features Apache 2 ChatGPT Llama3 LLMs open data open models Open Source AI Definition OSAID OSI
llms DevOps, LLMs, and the Software Development Singularity By gigaom.com Published On :: Thu, 07 Nov 2024 16:17:08 +0000 A Brief History of DevOps To understand the future of DevOps, it’s worth understanding its past—which I can recall with a level The post DevOps, LLMs, and the Software Development Singularity appeared first on Gigaom. Full Article
llms LLMs’ Data-Control Path Insecurity By www.belfercenter.org Published On :: May 9, 2024 May 9, 2024 The comingling of data and commands means that large language models are vulnerable to manipulation by prompt injection. Full Article
llms Automated electrosynthesis reaction mining with multimodal large language models (MLLMs) By pubs.rsc.org Published On :: Chem. Sci., 2024, 15,17881-17891DOI: 10.1039/D4SC04630G, Edge Article Open Access   This article is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported Licence.Shi Xuan Leong, Sergio Pablo-García, Zijian Zhang, Alán Aspuru-GuzikLeveraging multimodal large language models (MLLMs) to process multimodal data inputs and complex inter-modality data dependencies for automated (electro)chemical data mining from scientific literature.The content of this RSS Feed (c) The Royal Society of Chemistry Full Article