I’m confused about what you mean. OpenAPI has nothing to do with JS.
There are a number of blue cities in the Midwest. What’s the lowest temp you want? I live in Lincoln, Nebraska and it’s pretty great: nice weather most of the year, low cost of living, blue city, tons of parks. Only downside is dealing with red state bullshit from the state government.
Pretty dumb, honestly. If anything it just adds a Streisand effect to it as people try to figure out what’s censored.
Not that censoring it has any value whatsoever. Like if a child sees that, so fucking what?
Yeah it’s not all that uncommon in school, just increasingly uncommon in industry.
Visual… programming languages? Yikes.
Yep. Postgres is fantastic and there’s no justification to use proprietary bullshit like that.
VMware Workstation supports using a GPU. You can even use it in “pass-through” mode to give the VM full, exclusive access to the GPU.
Databases aren’t related to VMs or containers.
https://en.m.wikipedia.org/wiki/Database does a good job of describing what a database is. That page also has a lot of examples of uses of databases.
To answer your question about MySQL: in my experience it’s rarely used outside of classrooms or archaic systems. Postgres is a much better general-purpose option for SQL. SQLite is also nice for different use cases (such as a database on a mobile device).
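If it helps make it concrete, here’s a minimal sketch of what actually using one looks like, in Haskell with the sqlite-simple package (one library among many; the file and table names are made up for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.SQLite.Simple

main :: IO ()
main = do
  -- The whole database is just this one file on disk.
  conn <- open "example.db"
  execute_ conn "CREATE TABLE IF NOT EXISTS users (name TEXT)"
  execute conn "INSERT INTO users (name) VALUES (?)" (Only ("alice" :: String))
  -- Ask the database a question and get structured rows back.
  names <- query_ conn "SELECT name FROM users" :: IO [Only String]
  mapM_ (putStrLn . fromOnly) names
  close conn
```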
I used to work for a company that made software built on VMware. The biggest customer was using hundreds of thousands of VMs. Pretty sure they’re working on moving off VMware now because of all this bullshit.
But yeah, it’s gonna take a long time to move off.
If you mean for programming specifically, I… don’t, really. At most it would be for a quick sanity check on syntax in a language I don’t write often, for which Google is fine. But otherwise I rely on documentation and search features of the various language/tool-specific websites.
No, you divide work so that the majority of it can be done in isolation and in parallel. Testing components together, if necessary, is done on integration branches as needed (which you don’t rebase, of course). Branches and MRs should be small and short-lived, with merges happening frequently. The shared, continuously updated main branch is where the actual collaboration happens: everyone branches off of it and merges back into it.
Trunk-based development is the industry-standard practice at this point, and for good reason. It’s friendlier for CI/CD and devops, allows changes to be tested in isolation before merging, and so on.
Sure… That’s what libraries are for. No one hand-rolls that stuff. You can do all of that just fine (and, actually, in a lot less code, mostly because Java is so fucking verbose) without using the nightmare that is Spring.
I know it’s a joke, but just wanted to say that uranium used for fuel is not something you can actually use for weaponry directly. It requires further enrichment to increase the concentration of U-235 to weapons-grade levels.
You do not understand how these things actually work. I mean, fair enough, most people don’t. But it’s a bit foolhardy to propose changes to how something works without understanding how it works now.
There is no “database”. That’s a fundamental misunderstanding of the technology. It is entirely impossible to query a model to determine if something is “present” or not (the question doesn’t even make sense in that context).
A model is, to greatly simplify things, a function (like in math) that will compute a response based on the input given. What this computation does is entirely opaque (including to the creators). It’s what we call a “black box”. To create said function, we start from a completely random mapping of inputs to outputs (we’ll call the numbers behind it weights from now on), then iteratively feed training data to the function, measure how close its output is to what we expect, and adjust the weights (which are just numbers) based on how far off it was. This is a gross simplification of the complexity involved (and doesn’t even touch on the structure of the model’s network itself), but it should give you a good idea.
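To make that loop concrete, here’s a toy sketch in Haskell. Nothing about it resembles a real model (there’s a single made-up weight and a trivially fake dataset), it just shows the “nudge the weights until the output matches” idea:

```haskell
-- Training data: inputs paired with expected outputs (here secretly y = 3x).
trainingData :: [(Double, Double)]
trainingData = [(1, 3), (2, 6), (3, 9)]

-- The "model": compute an output from an input using the current weight.
predict :: Double -> Double -> Double
predict w x = w * x

-- One adjustment: measure how far off the output is, nudge the weight
-- a small step in the direction that reduces the error.
step :: Double -> Double -> (Double, Double) -> Double
step rate w (x, y) = w - rate * 2 * (predict w x - y) * x

-- Start from an arbitrary weight and repeat the adjustment many times.
train :: Double
train = iterate epoch 0.5 !! 200
  where epoch w = foldl (step 0.01) w trainingData

main :: IO ()
main = print train  -- ends up very close to 3.0
```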
It’s applied statistics: we’re effectively creating a probability distribution over natural language itself, where we predict the next word based on how frequently we’ve seen words in a particular arrangement. This is old technology (it dates back to the 90s) that has hit the mainstream due to increases in computing power (training models is very computationally expensive) and massive increases in the size of the datasets used in training.
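As a heavily simplified sketch of that idea, here’s a toy bigram model in Haskell: count how often each word follows another in some text, then “predict” by picking the most frequent follower. A real model samples from a learned distribution conditioned on far more context, but the spirit is the same:

```haskell
import qualified Data.Map.Strict as M
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Count how often each word follows each other word in the training text.
bigramCounts :: [String] -> M.Map (String, String) Int
bigramCounts ws = M.fromListWith (+) [(pair, 1) | pair <- zip ws (tail ws)]

-- "Generate" the next word: the one most frequently seen after the given word.
nextWord :: M.Map (String, String) Int -> String -> Maybe String
nextWord counts w =
  case [(b, n) | ((a, b), n) <- M.toList counts, a == w] of
    []    -> Nothing
    cands -> Just (fst (maximumBy (comparing snd) cands))

main :: IO ()
main = do
  let counts = bigramCounts (words "the cat sat on the mat and the cat ran")
  print (nextWord counts "the")  -- Just "cat": "cat" followed "the" most often
```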
Source: senior software engineer with a computer science degree and multiple graduate-level courses on natural language processing and deep learning
Btw, I have serious issues with both capitalism itself and machine learning as it is applied by corporations, so don’t take what I’m saying to mean that I’m in any way an apologist for them. But it’s important to direct our criticisms of the system as precisely as possible.
It’s got nothing to do with capitalism. It’s fundamentally a matter of people using it for things it’s not actually good at, because ultimately it’s just statistics. The words generated are based on a probability distribution derived from its (huge) training dataset. It has no understanding or knowledge. It’s mimicry.
It’s why it’s incredibly stupid to try using it for the things people are trying to use it for, like as a source of information. It’s a model of language, yet people act like it has actual insight or understanding.
Technically “to eat” is the infinitive form of the verb, and using infinitives as nouns isn’t all that unusual in many languages.
The tooling has improved dramatically since then. There’s now a full-fledged language server (https://haskell-language-server.readthedocs.io/en/stable/), ghcup (https://www.haskell.org/ghcup/) is now a thing for installing/managing different versions of GHC/cabal/HLS, there are now formatters (https://github.com/tweag/ormolu), and cabal has modernized significantly and supports multi-package projects much more comfortably. Nix-based Haskell infrastructure is also pretty nice now. There’s even stuff like https://github.com/srid/haskell-template/blob/master/flake.nix to get spun up very quickly on a new project using Haskell and nix, including VS Code, a formatter, HLS, and a full development shell with a bunch of useful commands.
Another great modern thing (which powers HLS) is that GHC can now emit .hie files for each module it compiles, which are basically a standardized representation of that module’s AST that can be consumed/manipulated programmatically. Lots of tools can use this. One that’s particularly useful is https://github.com/wz1000/HieDb, which constructs an SQLite database from the information in these files, so you can basically have an index of every symbol definition, reference, export, etc. readily available to use however you want.
https://www.shellcheck.net/ is probably one of the most well-known.
https://simplex.chat/ is written entirely in Haskell.
https://pandoc.org/ is another big one.
https://serokell.io/blog/best-haskell-open-source-projects has a (non-exhaustive) list of a bunch more.
Haskell. It’s a fantastic language for writing your usual run-of-the-mill DB-backed web APIs (and a bunch of other stuff like compilers, data processing, CLIs, even scripting) and can do a lot of things that other languages simply can’t (obviously not in terms of computation, but in terms of what’s possible with the type system).
I’ve been writing it professionally for a while and am very happy with it. Would be nice if the job market for it was a bit broader. You can definitely get jobs doing it, you just don’t have quite as broad of a pool to choose from.
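To give a small (entirely made-up) taste of the type-system point: phantom type parameters let you turn an invariant like “this input has been sanitized” into a compile-time guarantee, so misuse doesn’t even build:

```haskell
-- Raw and Sanitized exist only at the type level; mixing them up won't compile.
newtype Input kind = Input String deriving Show

data Raw
data Sanitized

fromUser :: String -> Input Raw
fromUser = Input

sanitize :: Input Raw -> Input Sanitized
sanitize (Input s) = Input (filter (/= ';') s)  -- stand-in for real escaping

-- The database layer only ever accepts sanitized input.
runQuery :: Input Sanitized -> IO ()
runQuery (Input s) = putStrLn ("running: " ++ s)

main :: IO ()
main = do
  let q = fromUser "SELECT 1; DROP TABLE users"
  runQuery (sanitize q)
  -- runQuery q  -- type error: Input Raw is not Input Sanitized
```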
Only if using JSON merge patch, and that’s the only time it’s acceptable. But JSON patch should be preferred over JSON merge patch anyway.
Servers should accept both null and undefined for normal request bodies, and clients should treat both the same in responses. API designers should not give each its own bespoke semantics.
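To illustrate the difference with a made-up `nickname` field: JSON merge patch (RFC 7396) overloads null to mean “remove this member”, so there’s no way to express “set this field to literal null”:

```json
{ "nickname": null }
```

That merge patch deletes `nickname`. JSON patch (RFC 6902) makes the operation explicit instead, so removal and nulling stay distinct:

```json
[ { "op": "remove", "path": "/nickname" } ]
```

```json
[ { "op": "replace", "path": "/nickname", "value": null } ]
```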