You can get a good feel for what is really slow by using a very slow computer. When you take System 7, put it on a 16MHz 68020 with half a memory bus (Mac LC), and run it at 16-bit color, painting those windows takes a long time. The original Macintosh feels snappy, but it is doing one process in black and white. Our trick back then was to jump to black and white when we needed to get things done quickly, then back to higher bit depths when we wanted color or eye candy.

I think this is one reason why people are so impressed with IMGUI. They imagine that the UI code must be doing an incredible amount of work to feel so slow, but then they watch a similar IMGUI app build and display the whole UI every frame, 60-120 times per second, with plenty of processing power left over. But if the other frameworks weren't wasting clock time, they could feel plenty fast.

Even a lot of the new APIs are quite fast. I feel that slowness creeps into frameworks in two ways:

1. Hidden serialization of asynchronous processes, essentially causing tiny pauses throughout the main thread. For example: modify one thing, causing layout; move something else, causing layout; move something else, causing layout; and so on, until it is all recalculated and re-laid out, and only then allowing the paint.

2. Bad abstraction, causing the moral equivalent of N+1 queries in UI code.

Other times it came down to differing defaults for things like cache modes (write-through or write-back, etc.), power-saving options, and so forth, which again could be tweaked with config (though the discoverability of these config options was typically not very high). Generic drives in Windows often had the same problems, but manufacturers tended to make device-specific drivers readily available (usually in the box) for those OSes.
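For the IDE-era case being discussed, that kind of "magic" configuration was typically an hdparm incantation. A historical sketch only, not something to run on modern hardware (it assumes root and an IDE drive at the hypothetical /dev/hda):

```shell
# Historical Linux IDE tuning (illustrative; requires root and period hardware).
hdparm -i /dev/hda               # show the drive's identified capabilities and modes
hdparm -d1 -X udma5 /dev/hda     # turn DMA on and select UDMA mode 5
hdparm -t /dev/hda               # re-run the sequential read benchmark to confirm
```

Whether this helped, did nothing, or hung the machine depended entirely on the drive/controller pair, which is exactly the hit-and-miss behaviour described here.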
> I used a dual boot configuration back in the day and found the Windows experience a lot more efficient than the Linux desktop then.

That was often due to driver support: some hardware did not perform as quickly (or sometimes was not as stable) under generic OSS drivers as it did with the manufacturer's proprietary binaries (which were quite likely not available for Linux at all). It could be very hit-and-miss, with two otherwise very similar machines performing quite differently due to one controller on the motherboard.

Sometimes it was due to the generic driver not knowing for sure that a given device supported its faster modes well, and so erring on the side of caution. A not-uncommon example was a drive/controller combination ending up running in PIO mode or an old DMA mode despite supporting something much faster, in which case you could get the performance back with a little "magic" configuration manually telling it to use the better mode.

This is just some old-guy "bah humbug" rant/conspiracy. Your Amiga with Workbench is nowhere comparable to modern hardware+OSes. It can do some things similarly, if you squint appropriately, but at much degraded image fidelity and color quality, insecurely, with primitive multitasking, no networking, and heavy RAM and CPU constraints. I fully acknowledged software bloat is a thing. But we're not comparing some half-assed Electron app to some sleek hand-coded C/C++/Rust desktop app. You're comparing base software, built by decently well-educated engineers, that does inordinately more than the comparison set, so much more that it's ridiculous on its face. And then going on a rant about debug symbols and ELF headers (which bring a ton of benefits themselves).

Part of it is that no one bothers stripping symbols. But the reason for both of those is simply that memory isn't scarce, so we are lazy/efficient.
An alpine docker container doesn't have the kernel, so all that hardware support

Only if you run it on Linux through paravirtualization, in which case it's using the host's kernel.

The fact that it supports virtualizing an entire other OS in a safe and privileged manner should just further reinforce why the kernel is larger.

Sure, and you can see all of the contents of that here:

curl - a library that can handle full bidirectional HTTP communication in Unicode, including via SSL/TLS, and arbitrarily manage file streams *or* utilize Linux's built-in piping/redirection functionality
ssl - a full suite of cryptographic libraries and keys to allow secure communications and integration into other libraries/code (the aforementioned curl, for instance)
oniguruma - a full regular expression library for use in other programs (language VMs like Ruby, for instance)
musl - a libc runtime and its standard library
zlib - built-in compression functionality utilized by gzip, png, and others

Can you point to a base install of Workbench being able to do all of that? About the only thing in the Alpine base layout that it is directly comparable to is BusyBox+bash.
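The zlib entry in that list is easy to see at work through gzip, which wraps zlib's DEFLATE. A quick round trip (only a standard gzip binary assumed):

```shell
# zlib's DEFLATE via gzip: compress repetitive data, then round-trip it.
yes 'hello world' | head -n 1000 > original.txt   # ~12 KB of repetitive text
gzip -c original.txt > original.txt.gz            # compress (DEFLATE + gzip header)
gzip -dc original.txt.gz > roundtrip.txt          # decompress back to plain text
cmp original.txt roundtrip.txt                    # byte-identical round trip
wc -c original.txt original.txt.gz                # compressed copy is far smaller
```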