Systeminformation.graphics causes high CPU utilization #932


Open
BlockCnFuture opened this issue Sep 6, 2024 · 10 comments


@BlockCnFuture

Describe the bug
Calling the Systeminformation.graphics function, especially repeatedly inside an interval, causes high CPU utilization.

To Reproduce
Steps to reproduce the behavior:

  1. Call 'Systeminformation.graphics' inside an interval, every two seconds

Current Output
The function causes high CPU utilization

Expected behavior
Should not cause high CPU utilization

Environment (please complete the following information):

  • systeminformation package version: 5.22.4
  • OS: Windows 11
  • Hardware: HP 15L

Output of the environment information command:

┌─────────────────────────────────────────────────────────────────────────────────────────┐
│  SYSTEMINFORMATION                                                      Version: 5.22.4 │
└─────────────────────────────────────────────────────────────────────────────────────────┘

Operating System:
──────────────────────────────────────────────────────────────────────────────────────────
Platform         : Windows
Distro           : Microsoft Windows 11 Pro
Release          : 10.0.22631
Codename         :
Kernel           : 10.0.22631
Arch             : x64
Hostname         : tuzhis
Codepage         : 936
Build            : 22631
Hypervisor       :
RemoteSession    : true

System:
──────────────────────────────────────────────────────────────────────────────────────────
Manufacturer     : HP
Model            : Victus by HP 15L Gaming Desktop TG02-2xxx
Version          :
Virtual          :

CPU:
──────────────────────────────────────────────────────────────────────────────────────────
Manufacturer     : Intel
Brand            : Core™ i7-14700F
Family           : 6
Model            : 183
Stepping         : 1
Speed            : 2.1
Cores            : 28
PhysicalCores    : 20
PerformanceCores : 28
EfficiencyCores  :
Processors       : 1
Socket           : LGA1700

Additional context
This problem is probably caused by using Promise.all to execute multiple PowerShell commands concurrently on Windows. You could change it to execute them one by one. The execution would not be much slower, but the CPU utilization should improve a lot.
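
A minimal sketch of that change, assuming the commands are plain PowerShell invocations; the runSequentially helper below is hypothetical, not systeminformation internals:

import { exec } from 'child_process'
import { promisify } from 'util'

const execAsync = promisify(exec)

// Run PowerShell commands one after another instead of firing them all
// at once with Promise.all: total wall time grows only slightly, but only
// one powershell.exe is alive at any moment.
async function runSequentially(commands: string[]): Promise<string[]> {
	const results: string[] = []
	for (const cmd of commands) {
		const { stdout } = await execAsync(`powershell -NoProfile -Command "${cmd}"`)
		results.push(stdout)
	}
	return results
}

// Instead of: await Promise.all(commands.map(run)) // N concurrent processes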

@sebhildebrandt (Owner)

@BlockCnFuture Yes, on Windows the overhead of spawning a PowerShell process is really high. I do not suggest calling this every 2 seconds. I already described the whole situation in this issue: #616

For version 6.0 I am working on something where we spin up a pool of PowerShell processes that can then be reused by each function ...
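
For context, a very rough sketch of what such a pool could look like (hypothetical, not the actual 6.0 code):

import { spawn, ChildProcess } from 'child_process'

// Keep a few long-lived powershell.exe processes and hand commands to them
// over stdin, so each call avoids the process start-up cost. Matching each
// command's stdout back to its caller (e.g. via a sentinel string) is
// omitted for brevity.
class PowerShellPool {
	private workers: ChildProcess[] = []
	private next = 0

	constructor(size: number) {
		for (let i = 0; i < size; i++) {
			// '-Command -' makes PowerShell read its commands from stdin
			this.workers.push(spawn('powershell.exe', ['-NoProfile', '-NoLogo', '-NoExit', '-Command', '-']))
		}
	}

	// Round-robin dispatch to the next worker
	send(command: string): void {
		const worker = this.workers[this.next]
		this.next = (this.next + 1) % this.workers.length
		worker.stdin?.write(command + '\n')
	}
}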

@TheRedfoox

@sebhildebrandt Do you also recommend against calling graphics every second if si.powerShellStart(); is used? What is currently the right way to monitor GPU metrics every second?

@sebhildebrandt (Owner)

@TheRedfoox You are right, even when using si.powerShellStart() this is too often. This function needs a few seconds to run all the underlying Windows commands. Unfortunately this is all much slower on Windows than on macOS or Linux.
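
For reference, the keep-alive pattern with a slower poll looks roughly like this (the 10-second interval is just an illustration, and utilizationGpu is only populated for some GPUs/drivers):

import si from 'systeminformation'

// Reuse one PowerShell instance across calls (Windows only) and poll at
// a rate the call can actually sustain
si.powerShellStart()

const timer = setInterval(async () => {
	const { controllers } = await si.graphics()
	console.log(controllers.map((c) => c.utilizationGpu))
}, 10000)

// When finished polling:
// clearInterval(timer)
// si.powerShellRelease()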

@TheRedfoox

@sebhildebrandt I did not find a way to query only some GPU metrics (e.g. only GPU usage) to lighten the call. Can you confirm that this is not possible at the moment, or did I miss something in the documentation?

@sebhildebrandt (Owner)

@TheRedfoox Yes, I can confirm that (for now). The only way would probably be to write C++ or C# code ...

@mboudreau

Yeah, I've just hit this myself, which is frustrating as we really wanted to show GPU usage/temp to our users.

@mboudreau

@sebhildebrandt After looking into this, I don't think you need PowerShell at all. You can just use Node's child_process to run exec on any command with pretty good performance. For instance, running something like node -e "const now = Date.now();require('child_process').exec('echo hello', () => console.log(Date.now()-now))" returns in about 21 ms for me. You can actually run nvidia-smi this way and have it output XML, which can then be parsed into JSON.

I found a package that already does this, which is what I'm using right now. It's fully async and in my testing returns in about 250 ms, which is roughly the same as running the command directly.

Unless absolutely necessary, I would remove all PowerShell calls in Windows environments and simply use exec.
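
For illustration, the XML route mentioned above could look roughly like this (xml2js is just one parser that would work; the report structure follows nvidia-smi's own XML schema):

import { exec } from 'child_process'
import { parseStringPromise } from 'xml2js'

// 'nvidia-smi -q -x' dumps the full GPU report as XML; parse it to JSON
exec('nvidia-smi -q -x', async (error, stdout) => {
	if (error) throw error
	const report = await parseStringPromise(stdout)
	console.log(JSON.stringify(report.nvidia_smi_log.gpu, null, 2))
})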

@mboudreau

Actually, scratch that. I had assumed the slowness came from using PowerShell to call nvidia-smi, but that doesn't seem to be the case. You are using exec, but you seem to be using execSync instead of the async version, which blocks the thread entirely...
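
To see why that matters, here is a toy comparison (not systeminformation code): execSync parks the whole event loop until the child exits, while the async exec lets timers and I/O keep running:

import { exec, execSync } from 'child_process'

setInterval(() => console.log('tick'), 100) // keeps firing with async exec

// Async: the callback runs later; the event loop stays responsive
exec('nvidia-smi', (err) => {
	if (!err) console.log('async call finished')
})

// Sync: every 'tick' above stalls until nvidia-smi exits
// const out = execSync('nvidia-smi').toString()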

@TheRedfoox

@mboudreau Do you have a solution in the meantime? I call graphics["*"], but as I refresh my data every second it causes a very slight freeze at the time of the call.

I really don't know what solution to try, or even where to start.

@mboudreau

@TheRedfoox I ended up calling nvidia-smi directly from Node; it should be on the PATH for any user with NVIDIA drivers installed:

import { exec } from 'child_process'
import { compact, isNil } from 'lodash'

// Result shape for this snippet (stands in for the GPU / GPUObject types
// referenced above; its fields mirror the object literal built below)
export interface GPU {
	index: number
	name?: string
	domain?: string
	busId?: string
	device?: string
	function?: string
	subDeviceId?: number
	displayMode: boolean
	displayActive: boolean
	fanSpeed?: number
	memoryTotal?: number
	memoryUsed?: number
	memoryFree?: number
	gpuUsage?: number
	memoryUsage?: number
	gpuTemp?: number
	memoryTemp?: number
	powerDraw?: number
	powerLimit?: number
	gpuClock?: number
	memoryClock?: number
}

export const getGPUs = async (): Promise<GPU[]> =>
	new Promise((resolve, reject) => {
		// Fields to query; drop entries here to make the call even faster
		const query = [
			'index',
			'name',
			'pci.bus_id',
			'pci.sub_device_id',
			'display_mode',
			'display_active',
			'fan.speed',
			'memory.total',
			'memory.used',
			'memory.free',
			'utilization.gpu',
			'utilization.memory',
			'temperature.gpu',
			'temperature.memory',
			'power.draw',
			'power.limit',
			'clocks.gr',
			'clocks.mem'
		]
		exec(`nvidia-smi --query-gpu=${query.join(',')} --format=csv,nounits,noheader`, (error, stdout) => {
			if (error) {
				return reject(error)
			}

			// One CSV line per GPU; blank lines and 'N/A' fields become undefined
			const gpus: GPU[] = compact(
				stdout
					.trim()
					.split(/\n/)
					.map((line) => {
						if (!line.trim()) {
							return undefined
						}
						const raw = line
							.trim()
							.split(', ')
							.map((v) => (v.includes('N/A') ? undefined : v))
						// pci.bus_id looks like "00000000:01:00.0" (domain:bus:device.function)
						const bus = raw[2]?.split(':')
						return {
							index: Number(raw[0]),
							name: raw[1],
							domain: bus?.[0],
							busId: bus?.[1],
							device: bus?.[2]?.split('.')[0],
							function: bus?.[2]?.split('.')[1],
							subDeviceId: isNil(raw[3]) ? undefined : Number(raw[3]),
							displayMode: raw[4] === 'Enabled',
							displayActive: raw[5] === 'Enabled',
							fanSpeed: isNil(raw[6]) ? undefined : Number(raw[6]),
							memoryTotal: isNil(raw[7]) ? undefined : Number(raw[7]),
							memoryUsed: isNil(raw[8]) ? undefined : Number(raw[8]),
							memoryFree: isNil(raw[9]) ? undefined : Number(raw[9]),
							gpuUsage: isNil(raw[10]) ? undefined : Number(raw[10]),
							memoryUsage: isNil(raw[11]) ? undefined : Number(raw[11]),
							gpuTemp: isNil(raw[12]) ? undefined : Number(raw[12]),
							memoryTemp: isNil(raw[13]) ? undefined : Number(raw[13]),
							powerDraw: isNil(raw[14]) ? undefined : Number(raw[14]),
							powerLimit: isNil(raw[15]) ? undefined : Number(raw[15]),
							gpuClock: isNil(raw[16]) ? undefined : Number(raw[16]),
							memoryClock: isNil(raw[17]) ? undefined : Number(raw[17])
						}
					})
			)

			resolve(gpus)
		})
	})

These calls take between 60 and 90 ms to get information from your GPUs. You can make it even faster by reducing the number of queried fields. It works perfectly in our system now. Getting the output as XML definitely slows things down in SI, so I used CSV instead.
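
Hypothetical usage on a one-second poll, which stays well within the 60-90 ms budget above:

// Poll GPU metrics once per second; at 60-90 ms per call this leaves
// plenty of headroom compared to the multi-second PowerShell path
setInterval(async () => {
	try {
		const gpus = await getGPUs()
		console.log(gpus.map((g) => ({ name: g.name, usage: g.gpuUsage, temp: g.gpuTemp })))
	} catch (err) {
		console.error('is nvidia-smi on the PATH?', err)
	}
}, 1000)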
