  • Are there any risks or disadvantages to building software from source, compared to installing a package?

    Well, compiling from source is the “installing dodgy freeware .exe” of the Linux world. You have to trust whoever is distributing that particular version of the source code, and ideally vet it yourself. When installing a binary package from your distro’s repositories, presumably someone else has done the vetting for you already. Another downside is that you end up running the project’s build scripts before you can even run the application itself, which is extra attack surface.

    Can it mess with my system in any way?

    Yeah, unless you take precautions and compile in a container or at least a sandbox, the build scripts have complete unadulterated access to your user account, which is pretty much game over if they turn out to be malicious (see: https://xkcd.com/1200). Hopefully most FOSS software is not malicious, but it’s still a risk.

    If you then “install” the software onto your system, it also becomes difficult to uninstall or update, because the installed files aren’t tracked by your package manager or any other central place.

    I recommend using a source-based package manager and packaging your software with it instead; it’s typically no harder than just building from source by hand, and it mitigates all of those issues, since source-based package managers usually sandbox the build and keep track of the installed files for you.


  • All x86_64 CPUs support a certain “base” set of instructions. But most of them also support some additional instruction sets: SIMD (single instruction, multiple data: operations on vectors and matrices), crypto (encryption/hashing), virtualization (for running VMs), etc. Each of those instructions can replace dozens or hundreds of “base” instructions, dramatically speeding up certain operations.

    When compiling source code into binary form (which is basically a bunch of CPU instructions plus extra fluff), you have to choose which instructions to use for certain operations. E.g. if you want to multiply a vector by a matrix (a very common operation in a dozen branches of computer science), you can either do it one multiplication and addition at a time (almost as you would by hand), or use SIMD instructions that each perform several of those multiplications and additions at once in hardware.

    The problem is: which instruction sets do I use? If you use none, the resulting binary will be dogshit slow (by modern standards). If you use all of them, it will likely not run at all on most CPUs, because very few support every bizarre extension. There are workarounds, the main one being to ship two versions of your code: one that uses the extensions, and one that doesn’t, choosing between them at runtime by detecting whether the CPU supports the extension (sketched below). This doubles your binary size and has other drawbacks too. So, in most cases, it falls on whoever is packaging the software for your distro to choose which instruction sets to use. Typically the packager will be conservative so that the binary runs on most CPUs, at the expense of some slowdown. But when you, the user, compile the source code yourself, you can tell the compiler to use whatever instruction sets your CPU supports, to get the fastest possible binary (which might not run on other computers).
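    To make that concrete, here’s a minimal C sketch of the runtime-dispatch trick (my own illustration, not from any real project): a dot product written once in plain scalar code and once with AVX intrinsics, with the CPU probed at startup to decide which version is safe to call. The function names are made up; `__builtin_cpu_supports`, `__builtin_cpu_init` and the `target` attribute are real GCC/Clang features.

    ```c
    /* Sketch: runtime dispatch between a scalar and an AVX dot product.
       Compile with: gcc -O2 dot.c  (no -mavx needed; see the target attribute) */
    #include <immintrin.h>
    #include <stdio.h>

    /* Baseline version: uses only "base" x86_64 instructions, runs anywhere. */
    static float dot_scalar(const float *a, const float *b, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    /* AVX version: each instruction handles 8 floats at a time, but the
       program dies with "illegal instruction" on CPUs without AVX. */
    __attribute__((target("avx")))
    static float dot_avx(const float *a, const float *b, int n) {
        __m256 acc = _mm256_setzero_ps();
        int i = 0;
        for (; i + 8 <= n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_mul_ps(_mm256_loadu_ps(a + i),
                                                   _mm256_loadu_ps(b + i)));
        float lanes[8], sum = 0.0f;
        _mm256_storeu_ps(lanes, acc);
        for (int j = 0; j < 8; j++) sum += lanes[j]; /* fold the 8 lanes */
        for (; i < n; i++) sum += a[i] * b[i];       /* leftover elements */
        return sum;
    }

    int main(void) {
        float a[16], b[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 1.0f; }

        /* The runtime check: ask the CPU what it supports, then dispatch. */
        __builtin_cpu_init();
        int have_avx = __builtin_cpu_supports("avx");
        float r = have_avx ? dot_avx(a, b, 16) : dot_scalar(a, b, 16);
        printf("dot = %f (using %s)\n", r, have_avx ? "AVX" : "scalar");
        return 0;
    }
    ```

    And when you compile for your own machine only, you can skip the dispatch entirely: passing -march=native to GCC or Clang enables every extension the host CPU supports, at the cost of portability.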

    In the past all of this was very important, because many SIMD extensions weren’t as common as they are today, and most distros didn’t enable them when compiling. But nowadays the instruction sets on most CPUs are largely the same, distro packagers enable most of them, and the benefit of compiling yourself is minor: expect a speed improvement in the range of 0%-5%, with 0% being the most common outcome for most software.

    TL;DR: it used to matter a lot; today it’s not worth bothering unless you’re compiling everything anyway for other reasons.





  • When I started working with English-speaking people, it was genuinely a bit of a culture shock that everyone asks you “how are you doing” all the time. The first time it happened I spent like a minute quickly going over my week. The other person was surprised/annoyed and it was all kind of awkward. It took me like two weeks to finally start answering “goodthankswhataboutyou” instead of trying to think of a real answer.




  • Windows, starting with 8, is inherently hostile to its users in ways that are very difficult or impossible to mitigate. It’s a black box of complicated machinery, a lot of which is trying to spy on you, steal your data, show you ads, upsell you on their stupid cloud services so that they can steal more of your data, etc. At this point, disabling all of this is really difficult and unreliable.

    Linux on the other hand is like a box of spare parts that you can build whatever you want from. You really do need to read the manual, or else whatever you build will look and work like shit. However, if you do build something good, it’s yours now in a way that a proprietary OS never will be.



  • The internet (via your smartphone) gives you the ability to find any book, magazine or paper on any subject you want, for free (if you’re willing to sail under the right flag), within seconds. Of course no one has a full bookshelf anymore; the only reasons to want physical books nowadays are sentimentality, or some very specific old book that hasn’t been digitized yet (but in that case you won’t have it on your bookshelf and will have to go to the library anyway). The fastest and most accurate way of doing research today is getting the gist on Wikipedia, clicking through the source links and reading those, and combing through arXiv and Sci-Hub for anything relevant. If you are unfamiliar with the subject as a whole, you download the relevant book and read it. Of course no one wants to comb through physical books anymore; it’s a complete waste of time (provided, of course, they have been digitized).


  • They stopped doing research as it used to be for about 30 years.

    Was it really “like that” for any length of time? To me it seems like most people just believed whatever bullshit they saw on Facebook/Twitter/Insta/Reddit, otherwise it wouldn’t make sense to have so many bots pushing political content there. Before the internet it would be reading some random book/magazine you found, and before then it was hearsay from a relative.

    I think the people who did the research will continue doing the research. It doesn’t matter whether it’s through a library, a search engine, Wikipedia sources, or AI sources. As long as you know how to read the actual source, compare it with other (probably contradictory) information, and synthesize a conclusion for yourself, you’ll be fine; and if you never wanted to do that, it was always easy to stumble into misinfo or disinfo anyway.

    One actual problem that AI might cause is if the scientists doing the research start using it without due diligence. People are definitely using LLMs to help them write/structure papers ¹. That alone would probably be fine, but if they actually use it to “help” with methodology or other content, then we would indeed be in trouble, given how confidently incorrect LLM output can be.