

Maybe all of DOGE was about finding Epstein files content, and failed.
And now that they have been released, Musk realises there is no kompromat on him so he can recover some PR points or something


Such is “being rich and famous”.
Nobody on earth is “pure”.
Some people will do anything for themselves. This is how billionaires and monsters are made. They are ALL bad


Scott Manley has a video on this:
https://youtu.be/DCto6UkBJoI
My takeaway is that it isn’t unfeasible. We already have satellites that do a couple kilowatts, so a cluster of them might make sense. In isolation, it makes sense.
But there is launch cost, and the fact that de-orbiting/de-commissioning is a write-off, and the fact that preferred orbits (lots of sun) will very quickly become unavailable.
So there’s kind of a narrow window where you get a preferred orbit, your efficiency is good enough, and your launch costs are low enough.
But it’s junk.
It’s literally investing in junk.
There is no way this is a legitimate investment.
It has a finite life, regardless of how you stretch your tech. At some point, it can’t stay in orbit.
It’s AI. There is no way humans are in a position to lock in 4 years of hardware.
It’s satellites. There are so many factors outside our control (beyond just reaching the right orbit) that there is a massive failure rate.
It’s rockets. They are controlled explosives with 1 shot to get it right. Again, massive failure rate.
It just doesn’t make sense.
It’s feasible. I’m sure humanity would learn a lot. But AI is not a good use of kilowatts of power in space. AI is not a good use of the finite resources of Earth to launch satellites (never mind a million?!). AI is not a good reason to pollute the “good” bits of LEO.


Yeh, do 60fps, 30-bit color… and I guess HDR?
Do things that people can actually appreciate.
And do them in a way that utilises the new tech. 60fps looks completely different from 24fps… Work with that; it’s a new media format. Express your talent.


I’d take each of your metrics and multiply it by 10, and then multiply it by another 10 for everything you haven’t thought about, then probably double it for redundancy.
Because “fire temp” is meaningless in isolation. You need to know the temperature is evenly distributed (so multiple temperature probes). You need to know the temperature inside and the temperature outside (so you know your furnace isn’t literally melting). You need to know it’s not building pressure. You need to know it’s burning as cleanly as possible: gas inflow and outflow, clarity of the gas in and out, temperature of the gas in and out, and the status of the various gas delivery systems (fans, with their motor current/voltage/rpm/temp; filters; louvres; valves; pressures; flow rates). And you need to know ash is being removed correctly: that the grates, shakers, or whatever are working, that the ash is cooling, and that it’s being transported away.
The gas out will likely go through some heat recovery stages, so you need to know gas flow through those and water flow through those. Then it will likely be scrubbed of harmful chemicals, so you need to know pressures, flow rates etc for all that.
And every motor will have voltage/current/rpm/temperature measurements. Every valve will have a commanded position and actual position. Every pipe will have pressure and temperature sensors.
The multiple fire temperature probes would then be condensed into a pertinent value and a “good” or “fault” condition for the front panel display.
The multiple air inlet readings would be condensed into pertinent information and a good/fault condition.
Pipes of a process will have temperature/pressure good/fault conditions (maybe a low/good/over?)
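That condensing step can be sketched in a few lines. This is illustrative only: the thresholds are made up, and taking the mean as the “pertinent value” (plus a spread check for even heat distribution) is my assumption, not real furnace logic.

```python
def condense(readings, low, high, max_spread):
    """Fold several probe readings into one value plus a good/fault flag.

    Mean-as-pertinent-value and the spread check are illustrative
    choices, not real furnace-control logic.
    """
    mean = sum(readings) / len(readings)
    spread = max(readings) - min(readings)          # even-distribution check
    ok = (low <= mean <= high) and (spread <= max_spread)
    return mean, ("good" if ok else "fault")

print(condense([851, 848, 855], low=800, high=900, max_spread=20))  # good
print(condense([851, 700, 855], low=800, high=900, max_spread=20))  # fault: uneven
```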
And in the old days, before microprocessors and serial communications, it would have been a local-to-sensors control/indicator panel with every reading, then a feed back to the control room where it would be “summarised”. So hundreds of signals from each local control/indicator panel.
Imagine if the control room commanded a certain condition, but it wasn’t being achieved because a valve was stuck or because some local control over-rode it.
How would the control room operators know where to start? Just guess?
When you see a dangerous condition building, you do what is needed to get it under control and it doesn’t happen because…
You need to know why.


What the fuck is a french fry? You mean Freedom Fries?


Yeh, either proxy editing (where you edit low-res versions until export).
Or you could try a more suitable intermediary codec.
I presume you are editing h.264 or something else with “temporal compression”. Essentially there are a few full frames every second, and the other frames are stored as changes. Massively reduces file size, but makes random access expensive as hell.
Something like ProRes, DNxHD… I’m sure there are more. They store every frame, so decoding doesn’t require loading the last full frame and applying the changes to the current frame.
You will end up with massive files (compared to h.264 etc), but they should run a lot better for editing.
And while they’re not truly lossless, they’re “visually lossless” at editing quality, so you convert your source footage once then just work away.
Really high res projects will combine both of these. Proxy editing with intermediary codecs
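As a sketch, the conversion to an intermediary codec might use ffmpeg’s `prores_ks` encoder. The Python below only builds the command (it doesn’t run it); the file names and the profile choice are illustrative.

```python
# Build (but don't run) an ffmpeg command transcoding an h.264 source to
# ProRes for smoother editing. prores_ks profiles: 0=proxy, 1=LT,
# 2=standard 422, 3=HQ. Audio is passed through untouched.
def prores_cmd(src, dst, profile=2):
    return ["ffmpeg", "-i", src,
            "-c:v", "prores_ks", "-profile:v", str(profile),
            "-c:a", "copy", dst]

print(" ".join(prores_cmd("clip.mp4", "clip.mov")))
```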


What I’d recommend is setting up a few testing systems with 2-3GB of swap or more, and monitoring what happens over the course of a week or so under varying (memory) load conditions. As long as you haven’t encountered severe memory starvation during that week – in which case the test will not have been very useful – you will probably end up with some number of MB of swap occupied.
And
[… On Linux Kernel > 4.0] having a swap size of a few GB keeps your options open on modern kernels.
And finally
For laptop/desktop users who want to hibernate to swap, this also needs to be taken into account – in this case your swap file should be at least your physical RAM size.
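Those quotes boil down to a simple rule of thumb. This is my reading of the advice above, not official guidance, and the 4 GB baseline is an assumption standing in for “a few GB”.

```python
# Rule of thumb distilled from the quoted advice: a few GB of swap keeps
# options open on modern kernels; hibernation needs swap >= physical RAM.
def suggested_swap_gb(ram_gb, hibernate=False, baseline_gb=4):
    return max(baseline_gb, ram_gb) if hibernate else baseline_gb

print(suggested_swap_gb(16))                  # → 4
print(suggested_swap_gb(16, hibernate=True))  # → 16
```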


I’ve been using EndeavourOS for 12 months now.
Very light steam gaming. Office stuff is basically web browsers (occasionally I have to swap to windows boot for silly excel spreadsheets that don’t work online). Programming is delightful.
It’s been solid, and the installer was great.
The major issues have been from dual booting Windows (disable fast boot!) and from not updating frequently enough (keyring issues, tho EndeavourOS has plenty of “newb needs to update” helpers).
I love it. It’s mine, I own that laptop, and endeavouros works for me. I feel so much more in control than I ever did on windows.
I do have some basic experience running Debian servers (VMs for single service, or docker stuff), and I do programming.


I did this on my new Pixel 8 Pro. I loved it.
It was so easy, it worked, I was in control of my device.
Contactless payment didn’t work.
Which is a deal breaker for me.
I looked at some fin-tech solutions, I even bought a Pixel Watch (which didn’t work because I have a Workspace account). None of them let me work around the issue. Contactless just wouldn’t work.
Had to go back to stock android.
I’m constantly checking in on their attestation/verification/whatever status that would allow them to offer contactless payment (currently offered by Android/Apple/banks, but no open source software).
I want grapheneos and contactless so badly!


FCKGW?


In my experience, a Scheduler is something that schedules time on the CPU for processes (threads).
So 10 processes (threads) say “I need to do something”:
2 of those threads are “ready to continue” because they were previously waiting on some Disk IO (and responsibly released thread control while data was fetched).
1 of the threads says “this is critical for GPU operations”.
1 of those threads self declares it is elevated priority.
The scheduler decides which of those threads actually gets time on an available CPU core to be processed.
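That decision can be sketched as a toy scheduler. Assumptions: a lower priority number means more urgent, and blocked tasks are simply skipped; real schedulers track far more state (time slices, fairness, affinity, etc.).

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                                     # lower value = more urgent
    name: str = field(compare=False)
    ready: bool = field(default=True, compare=False)  # False = still blocked

def pick_next(tasks):
    """Give the next CPU slot to the most urgent runnable task."""
    runnable = [t for t in tasks if t.ready]
    return min(runnable) if runnable else None

tasks = [
    Task(2, "disk-io-a"),                # IO finished, ready to continue
    Task(2, "disk-io-b"),
    Task(0, "gpu-critical"),             # "critical for GPU operations"
    Task(1, "elevated"),                 # self-declared elevated priority
    Task(3, "background", ready=False),  # still waiting, not schedulable
]
print(pick_next(tasks).name)  # → gpu-critical
```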


In simple terms, this means that the image is now built so that it produces exactly the same result every time. If the image is rebuilt later using the same source, it will be identical down to the last bit.
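“Identical down to the last bit” is directly checkable: hash both build outputs and compare the digests. The byte strings below are stand-ins for real image files.

```python
import hashlib

# Two byte-identical artifacts hash to the same digest, so comparing
# SHA-256 sums is a simple way to verify a reproducible rebuild.
def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

build_a = b"fake image contents"   # stand-in for the first build output
build_b = b"fake image contents"   # stand-in for the rebuild
print(digest(build_a) == digest(build_b))  # → True
```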


Wtf is GNU/Linux? You mean SystemD/KDE?


Something about porn leading the way, something about DVDs winning, something about VHS winning.
All of that doesn’t matter.
Because Linux desktop (in my experience, KDE Plasma and Wayland), along with distros that do sensible things (I use EndeavourOS btw), is just SO much better than Windows.
I only boot Windows for fullscreen or GPU-based software that doesn’t exist on Linux.


sudo is a command that “does” something as “super user”.
Fun fact: it originally stood for “superuser do”, but it now stands for “substitute user do” as it can “do” as any user; it’s just that the default user argument is root (i.e. the super user).


Pretty sure all the RAM manufacturers are Korean? I guess China puts chips on PCBs, maybe? But South Korea has the knowledge.
And it had met domestic demand. RAM prices have been acceptable for many many years.
It’s the AI sector that is inflating demand (maybe by circular investment and contracts).
So, I don’t see anyone investing 10 years into the future to make ddr6 ram where their business plan relies on current trends.


It must take so much R&D to achieve anything remotely comparable to what Samsung, Micron (/Crucial… RIP) and SK Hynix can produce.
Fingers crossed they can undercut the 3 (now 2) big producers, which is doubtful. But hopefully they can at least cap the maximum price that decent memory can inflate to. Because at some point a medium-sized customer is gonna get fed up with the Samsung/Micron/SK Hynix bullshit and custom-order the RAM they need, and such a smaller producer will provide much better service for a similar price.


Only for multi-CPU mobos (and that would be pinning a thread to a CPU/core with NUMA enabled, so a task accesses its local RAM instead of reaching across the interconnect for all system RAM). Even then, I think all the RAM would run at the lowest common frequency.
I’ve never mixed CPU and RAM speeds. I’ve only ever worked on systems with matching CPUs and RAM modules.
I think the hardware cost and software complexity to achieve this is beyond the cost of “more RAM” or “faster storage (for faster swap)”.
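For what it’s worth, the pinning part itself is easy on Linux, e.g. via Python’s `os.sched_setaffinity`. The NUMA-aware memory placement (making the task allocate local RAM, e.g. via numactl) is the hard part and isn’t shown here. Linux-only sketch:

```python
import os

# Pin the current process (pid 0) to core 0. On a NUMA system you'd pair
# this with a memory policy so the task allocates node-local RAM; that
# part is beyond a few lines of Python.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))  # → {0}
```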


Nah, they have a cellular data connection.
It pays for itself, because the car manufacturer can sell the driving data to insurance companies.
And now it’s used to make sure your brakes subscription is up to date