On 12/23/11 15:47, Martin Sugioarto wrote:
> On Fri, 23 Dec 2011 11:18:03 +0200,
> Daniel Kalchev <daniel_at_digsys.bg> wrote:
>
>> The -RELEASE thing is just a freeze (or, let's say, a tested freeze) of
>> the corresponding branch at some point in time. It is the code available
>> and tested at that time.
>
> Hi Daniel,
>
> obviously performance is not a quality aspect, only stability.
>
>> FreeBSD is not a distribution. It also compiles with the latest
>> compiler - LLVM. :)
>
> I thought that the "D" in FreeBSD stands for "distribution". Yes, it's
> ok that it compiles with LLVM. Does it also run faster in benchmarks?
>
>> I find it amusing that people want everything compiled with GCC 4.7,
>> which is still very much in development, therefore highly unstable and
>> (probably) full of bugs.
>
> When you don't use the software, don't complain that it is buggy,
> because you won't find the bugs. You cannot always tell the others to
> make everything perfect.

As with GCC 4.7, CLANG/LLVM is still considered "experimental" and definitely has some issues with CPU architectures beyond Core2. Personally, I now compile everything with CLANG on FreeBSD 9.0/10.0, as long as I don't notice any concerns about correctness or stability.

Well, GCC 4.7 came up somewhere in the thread and I picked it up, sorry. It is easy enough to replace gcc 4.7 with 4.6.2 in this thread, which is now considered stable and ready for production. And as some writers in this thread mentioned, the performance gain could be enormous, since gcc 4.6 supports the Core i7 architecture and its new facilities: the optimizer is aware of the core/uncore design and, maybe, of the three cache levels. Is the legacy gcc 4.2 aware of that? I guess not, since it does not support architectures beyond Core2.

I tried using gcc 4.6.2 from ports to compile world, but I failed. Simply replacing/setting CC, CXX and CPP obviously isn't enough (a sketch of what I tried follows below).

> I don't want to have everything compiled on $COMPILER. I want that
> there is a reasonable quality. And for me quality is not only
> stability, but also speed.

Yes, agreed. I think quality also implies a reasonable speed. Speed at all costs, even at the cost of stability, is not an option, even for HPC systems, where jobs run uninterrupted for weeks or months (in our case).

>> Many suggested that the Linux binaries be run via the FreeBSD Linux
>> emulation. Unchanged.
>> There is one problem here though, the emulation is still 32 bit.

With the usage of even 32-bit Linux binaries you introduce all the mess you want to avoid by using FreeBSD. But it is very often recommended to use the so-called Linuxulator (enabling it is simple enough; a minimal sketch follows below). I'm happy to have this option (I cannot run FreeBSD binaries on some Ubuntu or CentOS distros), but in some cases people in the FreeBSD community rely too much on this 32-bit-limited option. I always prefer native BLOBs over emulated BLOBs.

> I'm not talking about emulation. I don't use FreeBSD to run emulated
> binaries. I (and many people) want efficient servers and eventually
> desktops. You should not expect people to tune the system for speed,
> when it's clear that the default setting does not make any sense. People
> will use default settings, because they trust developers that they
> thought about balanced stability, security and performance.
>
>> FreeBSD has safe defaults.
>
> This is what I am talking about. Don't complain that the benchmark does
> not show efficiency. No one is interested in tuning FreeBSD just for a
> benchmark application.
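Coming back to the failed buildworld with gcc 4.6.2 from ports: this is roughly the naive /etc/make.conf override I tried. The gcc46/g++46/cpp46 names are what lang/gcc46 installs; the NO_WERROR/WERROR and -march lines are additions I have seen suggested elsewhere, so take the whole thing as a sketch of the idea, not as a known-working recipe:

    # /etc/make.conf - sketch only; paths assume lang/gcc46 from ports
    CC=/usr/local/bin/gcc46
    CXX=/usr/local/bin/g++46
    CPP=/usr/local/bin/cpp46
    # the source tree is not warning-clean with gcc 4.6, so drop -Werror
    NO_WERROR=
    WERROR=
    # gcc 4.6 knows the Core i7 microarchitecture (legacy gcc 4.2 does not);
    # whether the build passes this through everywhere is unclear to me
    CFLAGS+= -march=corei7

As said, even with something like this, buildworld stops for me; if someone has a complete, working set of overrides, I would be glad to see it.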
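And since the Linuxulator keeps being recommended, here is a minimal sketch of how it is usually enabled (the linux_base port name is from the current ports tree as of this writing; check what is current for you):

    # load the 32-bit Linux ABI module now and on every boot
    kldload linux
    echo 'linux_enable="YES"' >> /etc/rc.conf
    # install a Linux userland for the emulated binaries to link against
    cd /usr/ports/emulators/linux_base-f10 && make install clean

That part is easy; my point above is about what you give up once you depend on it.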
>> It is supposed to work out of the box on
>> whatever hardware you put it on. As long as it has drivers for that
>> hardware, of course.
>> Once you have a working installation, you may tweak it all the way you
>> wish.
>
> But if you don't tweak, you get a fair result in a benchmark. This is
> what you will see as a user of the system. These are the default
> settings; that means developers chose them as the BEST choice for the
> system.

Well, it is a very nice move to have conservative settings that keep FreeBSD stable for everyone intending to use it out of the box. But what I really miss is a dedicated group of people focused on HPC and on secure, stable tweaks that achieve it.

An operating system is a living thing; it is about balancing limited resources. One can try to balance out every potential workload that can occur, and the result is a very good all-round system. But in the server or HPC area, it might be necessary to push some parts in favor of others. When computing, I do not need high USB performance beyond a responsive keyboard; I/O and CPU performance is the main goal, but this seems to be the most difficult part. A file or network server, for instance, would balance more towards network I/O and delivering small pieces of data instead of streaming large blocks of memory. I'm certain that the tweaks would differ between these scenarios (a sketch of what I mean follows at the end of this mail). At home or on the desktop, the situation is more complicated, since people tend to use a lot of multimedia, and stuttering audio is just as unpleasant as stuck video.

>> If your installation is pre-optimized, chances are it will crash all
>> the time on you and there will be no easy way for you to fix it, short
>> of installing another "distribution".
>
> Sorry, no. If optimization makes bugs appear, there are bugs in the
> code (somewhere). And you will never find them when you hide them like
> this. You will also never see many advances in performance.
>
> --
> Martin

Agreed. Benchmarking can push a system towards its limits and reveal bottlenecks or bugs that make it crash. It is better to have it crash under benchmark torture than in a data center delivering valuable data for business, or in science, where the server crashes just before finishing a two-month run due to some buffer problem ... oh.
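To make the point about workload-specific tweaks a bit more concrete, here is the kind of contrast I have in mind, as a sketch only. The sysctl names exist in 9.0, but the values are placeholders, not a tested recipe:

    # /etc/sysctl.conf - leaning towards a file/network server:
    kern.ipc.somaxconn=1024          # deeper listen queue for many small requests
    kern.ipc.maxsockbuf=16777216     # allow larger socket buffers
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216

    # /etc/sysctl.conf - leaning towards an HPC compute node:
    kern.ipc.shmmax=17179869184      # large SysV shared memory segments (bytes)
    kern.ipc.shmall=4194304          # ... and enough pages to back them
    vfs.read_max=32                  # more read-ahead for large streaming I/O

Two different machines, two different sysctl.conf files; the defaults can only sit somewhere in between.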