Readability before speed?
To what extent is faster program code significantly more complex or less readable? I would like a few examples of why, because I can't agree with that as it stands.
I think it is rather a question of what kind of code you are used to.
You should always remember that someone will have to change or revise the code at some point. For that, it is necessary that this person finds their way around the code as quickly as possible.
Only in rare exceptional cases does this mean that the code has to be slower just because it is written comprehensibly. When in doubt, there are comment lines that explain the programmer’s intention.
And there are a number of good practices: a variable is not called “v121ax” but “Control_Prozent”. And a subroutine is not called “wgstrg”; you give it an understandable name.
And in some (older) languages there are constructs that you should not use. In COBOL, for example:
Whoever does that owes the colleagues a beer!
Why is old bad?
“Old” is not bad! Quite a while ago I wrote a couple of programs in COBOL, and not just “for fun”. The execution speed was sensational. But the colleagues were horrified!
And that is the problem: the colleagues who still know COBOL are pensioners or in the cemetery. Too bad.
Why? Then you can’t run into any hardware limits.
Well – people’s lifespans are limited.
I still learned COBOL, Fortran and Algol 60. Today’s generation of programmers has no idea. And I’m not getting any younger.
A lot of code today is written in Java. Java is an extremely hardware- and memory-intensive language (measured against C and assembler). But it is a language in which you get results very quickly, so it is very popular.
Of course, program code in a hardware-near language is faster than in a highly abstracted language. But hardly anyone cares about that.
Why is that?
Nowadays, the performance of computers and smartphones is so good that really fast code is rarely needed any more. So more value is placed on clean code that is simple and understandable.

But there are cases where speed is more important. Take Rust and C# as an example, where you quickly end up in unsafe blocks, and Rust, C#, C/C++, … when you work with pointers. You can break something very quickly, because the standard checks don’t really apply to raw pointers. That means safety measures tailored to your own code have to be implemented here and there. This can lead to code overhead that is nevertheless faster. In principle, more code is harder to read simply because there is more of it. Especially for less experienced developers this can raise question marks. I could mention more examples, but this should do as an illustration.

Comments are of course always important. The compiler usually removes them, which is why they cost no performance at runtime. But comments are not almighty: very complex code remains complex even with comments and still has to be understood. Especially when it comes to maintainability, clean and easy-to-read code is important, and it is a good fit for today’s hardware.
Are you sure? Everything new seems to run badly.
What would be unclean but faster? I have often heard and read that, but never experienced it myself.
Well, you have to distinguish between: I slap a product together and don’t care whether bugs end up in it, versus I try to fix all bugs before the release. Even the most cleanly written code can run badly on modern hardware if you have accidentally built in bugs or other bottlenecks.
As for what would be unclean: take Rust and C# as an example, when you work a lot with pointers to gain speed because you access memory directly. The price is code overhead: unsafe blocks, pointers (lots of referencing and dereferencing symbols) and more. These languages have unsafe blocks precisely because direct, unprotected work with memory is fast, but unsafe and unclean. Many compiler optimizations and safety checks then no longer apply. Imagine you create an array and work with it – everything is cleanly readable. Now instead you allocate a memory region yourself and use it directly. The responsibility lies with you: the necessary checks and validations you have to build in yourself, which makes the code longer and less readable.
As you can see from my first question, I can’t even run Minecraft properly.
I personally am also a fan of machine-near code, which is why I’ve been working a lot with Rust lately 🙂
I don’t like this browser
Then why buy new computers? You can deal with the limits by using a fast language. And you could set up a donation account for those who don’t have enough computing power or money.
And 50 MB? We need to get back to machine code, I can’t take it any more.
I don’t buy that. Even a Raspberry Pi with 2 GB of RAM can run complex applications.
But I can only run machine code; for me, all websites are very slow.
Simplicity and saving money. Many people know HTML and CSS, as well as JS. Besides, you don’t always need high performance, and the hardware handles it quite well nowadays.
And why not make a real program out of them if they are so slow?
A lot. Never heard of Electron, Tauri and co.? Look at Discord, Teams and many other programs. They are websites, nothing more, even if they look like programs.
What do web applications have to do with desktop applications?
I wasn’t talking about the web. C on the web has only become directly possible with WASM. I’m talking about desktop applications.
When was that time? I’ve never seen C in web development.
Define “high-level language”. High-level languages can be anything, even Rust and C++, even if people like to call them low-level. In principle, high-level languages are those that sit above a low-level language. ASM would be the low-level language and Rust/C++ the high-level languages. And yes, languages like C#, Java, JS, etc. have been around forever. Let’s take an example: today’s web applications, built with Electron, Tauri or whatever, are quite okay. They look modern, can do a lot and run fine on current hardware. Back then, C/C++ was preferred because it simply ran better on the hardware.
But high-level languages have been available for a long time and have been used for a long time
Well-readable code can also execute fast. The two do not automatically exclude each other.
However, if you are faced with the concrete decision of whether to formulate code in a more complicated but faster way, or in a more readable but somewhat slower form, you should first weigh up how much the intended speed advantage actually matters for the final product.
If there are no really good reasons or needs for the more complicated solution, it should not be used.
Well-readable code has the advantage of being easier to understand. Other developers can therefore get into it in less time and make adjustments and extensions. The more complicated the code is formulated, the greater the probability that errors creep in – be it in the initial development or later in maintenance and extensions. This can lay the foundation for an unstable end product.
A classic example is Duff’s Device for optimizing loops. This trick is not so easy to grasp at first glance and could even be mistaken for a syntax error by some developers. For most applications with loops it is unnecessary, especially as current compilers bring their own techniques for unrolling loops.
As a second (small) example, one could iterate over an array with an iterator. A classic for loop would tend to be faster, but the difference is too small to carry any weight.
More generally, the choice of language can also be considered in this context, because where possible, simpler languages often displace those that are harder to get into.
A good example is the field of desktop application development. Over the years, C++ has been pushed back there by languages such as C#, Delphi, Java, TypeScript or Visual Basic.
Another example is Python or JavaScript, which have spread widely in various fields (e.g. ML, web, IoT). Only the really time-critical parts are still implemented in a language like C.
Again and again, one finds decisions in software projects where complexity is deliberately removed:
All of these cases are examples where simplicity and better readability were chosen over better execution speed.
If program code is not written readably, it becomes harder and harder to understand and extend.
So you might have to rewrite it in the end.
There are hardly any applications where speed is so important that readability should be (strongly) sacrificed for it.
Why does speed reduce readability?
Because you don’t think like a computer, but in abstractions.
For the computer, “do X 100 times” is more complicated than simply writing X out a hundred times, because it has to count, jump and break out of the loop itself.
But if you have in front of you a file in which tens of thousands of commands are repeated hundreds of times, you will have much more to read.
Yes, that’s the same thing. You’re the one who is supposed to read the program code.
However, the commands must be read and interpreted or compiled, depending on the language. One could, though, vaguely imagine telling the compiler that the function is critical and having the loop written out in the final result.
That’s not the same
No, more code does not automatically mean more execution time; commands take different amounts of time.
Take a book and copy every sentence thousands of times. Then try to read it.
It has to load the code, doesn’t it? More code means more execution time, doesn’t it?
Why?
The one doesn’t exclude the other. It depends on what the compiler makes of the input.
I’d also like to see where it would actually be different. Many seem to hate a certain style and then declare it unreadable.
In the end, it also depends on what the binary code looks like.
High-level programming languages are excellent at compiling different formulations identically. Look at the IL code that results when you translate C# code: different formulations of loops, for example, are lowered into identical structures that always look the same.
Best regards, Knom