There were definitely three issues. One was related to the licensing terms; I think there is a talk later on licensing, and I'm not that involved there, there are other experts. The second one was monitoring and observability, which had not been great in Java 7 or 8 and have since seen significant improvement. I've seen there are other talks about the monitoring part, so this talk also does not cover monitoring. My expertise comes more from the benchmarking area: performance and responsiveness.

What Test Is Running?

I got this email from a dev a month and a half ago or so: "We are evaluating this latest Intel platform and I'm doing these runs, and I'm getting a variability of almost 40%, 50%, 60% from run to run," and I'm thinking, "No, I have not seen such things." You can see the two metrics he was talking about. One is a full-capacity metric: when the system is fully utilized – only in case everything else fails and only a few systems are carrying the load – performance is very repeatable. The other is where my production runs at around 30% to 40% operational; there I see this 60% to 70% variability from run to run, and that started to give me some idea: "What is going on here? Because we don't see such things."

That led me to ask the very first question to that person: "What are you running?" This is a pretty large company. They said, "I have around 3,000 to 4,000 applications; some of them have a very small footprint." These are microservices that talk to each other, and some of them are very small, like two gig. We run them usually in a two or three gig heap. Very few of them are very large; we go up to a 100 gig heap. Then, what do I go run? The traditional benchmarks, etc., that we find. The problem is, he said, "I cannot take anything from my production environment and have the confidence of repeatability running on these systems." He was using a benchmark of which he said, "We run our system in production, so we know the behavior, and we have done the benchmark part, and they pretty much matched with regard to GC, CPU utilization, or network I/O in most of the situations. We have created this proxy; that's what we are running." OK, that helps us, because we can run a similar proxy. These are some of the components we ended up running in our environment.

The traditional environment could be your app and the JVM; it could be running in a container, and from that container, when you launch a process, you could have the number one case. You launch the process, it goes to one of the sockets – this is a two-socket system. The process will start on, let's say, one end, and it will get local memory; so in number one, you are running on a socket that is getting its local memory. In number two, you start your process, it starts on the other socket, and it does not get its own local memory; it gets the memory from the other socket. In the third case, you launch the process again on the second socket and it gets local memory. These are the traditional two-socket systems. Now let me ask this question: between one and two, which one is going to perform better? One, because of the local memory, and that part can cause significant variation in certain cases. It will not change the application's total throughput, because we are talking about memory latency of 100 to 140 nanoseconds, but it can make a big difference in your response time, the latency. Anytime you are sensitive to latency, that path can make a big difference. That is one variation for responsiveness – why responsiveness could be different, but not the total throughput.

The second part is not covered in that picture. Many of you may be using containers or virtual machines, and when you are setting the thread pool within your application, it depends on how many CPU threads that API call gets. The answer to this: if you're running on a large system, it would be all the cores – for example, this system is running 112 threads, so it could be 112 threads – or, if you're running within a VM and that VM only gives you 16 threads or so, the answer would be 16. As the app writer, you don't have the control; it is the deployment scenario you have to look at. In one case, you would be setting your thread pool based on 112, in another just on 8 or 16.
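A minimal sketch of the thread-pool sizing point discussed above: the same code, deployed on bare metal versus a CPU-limited VM or container, sees very different CPU counts and therefore builds very different pools. The class name here is illustrative; the API call is the JVM's standard `Runtime.availableProcessors()`.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        // On a large two-socket server this may report e.g. 112 hardware
        // threads; inside a CPU-limited container, a container-aware JVM
        // (JDK 10+, or 8u191+) reports the container's limit instead, e.g. 16.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("availableProcessors = " + cpus);

        // Sizing a fixed pool from that call is exactly the "112 vs. 8 or 16"
        // situation: the app writer does not control the number; the
        // deployment does.
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        pool.shutdown();
    }
}
```

If the deployment needs to override what this call returns, the JVM flag `-XX:ActiveProcessorCount=N` pins the reported count regardless of the host or container size.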