• 8 Posts
  • 68 Comments
Joined 1 year ago
Cake day: July 2nd, 2023


  • Z4rK@lemmy.world to memes@lemmy.world · Great job
    7 months ago

    Alas. They have said they plan to open-source some of the code, and potentially all of it, but there has been little progress.

    They recently ported to Linux, which I think will bring them much more negative feedback here, so hopefully with that added pressure they’ll settle on a proper copyleft license and open up their source to build trust.


  • Z4rK@lemmy.world to memes@lemmy.world · Great job
    edited · 7 months ago

    There are two modes of AI integration. The first is a standard LLM in a side panel: search and learning directly in the terminal, with the commands I need available to run right where I need them. What you get is the same as if you had used ChatGPT to answer your question, then copied the relevant part of the answer into your terminal and ran it.

    There is also AI Command Suggestion: you start typing a command or search query prefixed with #, and you get commands directly back to run (a sketch of this flow follows at the end of this comment). It’s quite different from auto-complete. Warp has very good auto-complete and command suggestion as well; I’m just talking about the AI-specific features here.

    https://www.warp.dev/warp-ai

    It’s just a convenient placement of AI at your fingertips when working in the terminal.
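
    To make that # flow concrete, here’s a minimal sketch assuming the OpenAI Python client. It is not Warp’s actual implementation, and the model name is a placeholder.

        # Minimal sketch of the "#" command-suggestion flow described above.
        # Not Warp's implementation; assumes the OpenAI Python client and an
        # OPENAI_API_KEY in the environment. The model name is a placeholder.
        import subprocess
        from openai import OpenAI

        client = OpenAI()

        def suggest_command(query: str) -> str:
            """Turn a '#'-prefixed natural-language query into one shell command."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; any chat model works
                messages=[
                    {"role": "system",
                     "content": "Reply with a single shell command and nothing else."},
                    {"role": "user", "content": query.lstrip("# ")},
                ],
            )
            return resp.choices[0].message.content.strip()

        command = suggest_command("# find files larger than 100MB")
        print("Suggested:", command)
        if input("Run it? [y/N] ").lower() == "y":
            subprocess.run(command, shell=True)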


  • Z4rK@lemmy.world to memes@lemmy.world · Great job
    7 months ago

    Warp.dev! It’s the best terminal I’ve used so far, and the best use of AI as well! The AI help is extremely useful for the thousands of small commands you know exist but rarely use. And it’s very well implemented.


  • All these examples are not just using Stable Diffusion, though. They are using an LLM to write a generative image prompt for DALL-E / SD, which is then executed. In none of these examples are we shown the actual prompt.

    If you instead instruct the LLM to first show the text prompt, review it to make sure it does not mention any elephants, revise it if necessary, and only then generate the image, you’ll get much better results (sketched at the end of this comment). Now, ChatGPT is terrible at following instructions like these unless you set up the prompt very specifically, but it will still follow more of them internally.

    Anyway, the issue in all the examples above does not stem from Stable Diffusion but from the LLM writing an ineffective prompt for it: the LLM tries to exclude elephants with a simple negative word, which does not work well.
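
    Here’s a minimal sketch of that review-then-generate pipeline, assuming the OpenAI Python client; the model names are placeholders, not what these examples actually used.

        # Minimal sketch of the review-then-generate flow suggested above.
        # Illustrative only; assumes the OpenAI Python client, and both model
        # names are placeholders.
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set

        def draft_prompt(request: str) -> str:
            """Ask the LLM for an image prompt that only describes what
            should appear, never what should be excluded."""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder
                messages=[
                    {"role": "system",
                     "content": "Write a short image-generation prompt. Describe "
                                "only what should appear; never mention excluded "
                                "objects, not even negatively."},
                    {"role": "user", "content": request},
                ],
            )
            return resp.choices[0].message.content.strip()

        prompt = draft_prompt("An empty room. Make sure there is no elephant.")
        if "elephant" in prompt.lower():
            # Review step: revise instead of sending a negative word downstream.
            prompt = draft_prompt("Rewrite this prompt with no mention of "
                                  "elephants at all: " + prompt)

        image = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
        print(image.data[0].url)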


  • Yeah, you may be fine with the $5 plan, but that’s the lowest tier available.

    AFAIK they are not actually turning a profit yet, just expanding, so it’s an eye-opener both for how expensive it is to run a search business and for how much value Google and others must be getting from your personal information.

    For now, though, their user base seems to lean heavily towards business users who can justify the expense as part of becoming more effective professionally. Hopefully over time they’ll grow large enough to offer cheaper plans for regular people while staying privacy-focused and ad-free.