
Calculating CPU utilization from /proc/stat on Linux (a C implementation)

2014-06-20 16:09


Introduction to the proc filesystem

The /proc filesystem is a pseudo filesystem: it exists only in memory and takes no space on disk. It provides a file-based interface between the kernel and processes, so users and applications can read system information from /proc and change certain kernel parameters through it. Because that information (processes, for example) changes constantly, the proc filesystem fetches the requested data from the kernel dynamically each time a file under /proc is read.

Some directories under /proc are named by number; these are process directories. Every process currently running in the system has a corresponding directory /proc/<pid>, named after its process ID, which is the interface for reading that process's information. In addition, from Linux 2.6.0-test6 onwards each /proc/<pid> directory contains a task subdirectory, and /proc/<pid>/task contains one directory per thread owned by the process, /proc/<pid>/task/<tid>, which is the interface for reading that thread's information.


The /proc/stat file


[root@root c_study]# cat /proc/stat

cpu 15579 99 13680 698457 10939 40 651 0 0

cpu0 1669 7 1974 338065 1396 5 9 0 0

cpu1 13910 91 11705 360391 9542 35 641 0 0

intr 957831 163 8 0 1 1 0 5 0 1 0 0 0 101 0 0 3582 0 37804 3657 22410 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

ctxt 501479

btime 1363495431

processes 40101

procs_running 1

procs_blocked 0

softirq 1396087 0 693403 12972 15932 35928 3 44577 479 592793

[root@root c_study]#

The first line gives the aggregate CPU usage across all cores, so its numbers are all we need for the calculation. The fields on that line are, in order (all values in jiffies):

(A jiffy is counted by a global kernel variable that records the number of timer ticks since boot. In Linux, one tick can roughly be thought of as the smallest time slice used by the process scheduler; its length differs between kernels, typically between 1 ms and 10 ms.)

user (15579): time spent in user mode since boot, excluding processes with a negative nice value.
nice (99): time spent in user mode by processes with a negative nice value since boot.
system (13680): time spent in kernel mode since boot.
idle (698457): idle time since boot, not counting IO-wait time.
iowait (10939): time spent waiting for IO to complete since boot (since 2.5.41).
irq (40): time spent servicing hardware interrupts since boot (since 2.6.0-test4).
softirq (651): time spent servicing software interrupts since boot (since 2.6.0-test4).
steal (0): time spent in other operating systems when running in a virtualized environment (since 2.6.11).
guest (0): time spent running a virtual CPU for guest operating systems under the control of the Linux kernel (since 2.6.24).

Conclusion: the total CPU time is totalCpuTime = user + nice + system + idle + iowait + irq + softirq + steal + guest.
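As a quick illustration of these fields (this snippet is not part of the original program), here is a minimal sketch that reads the aggregate "cpu" line, sums it into totalCpuTime, and converts jiffies to seconds with sysconf(_SC_CLK_TCK); on older kernels the last columns are simply absent and stay zero:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* user nice system idle iowait irq softirq steal guest */
    unsigned long long v[9] = {0};
    unsigned long long total = 0;
    int i;

    FILE *fp = fopen("/proc/stat", "r");
    if (!fp) { perror("fopen /proc/stat"); return 1; }
    /* kernels older than 2.6.11/2.6.24 print fewer than 9 columns; missing ones stay 0 */
    fscanf(fp, "cpu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
           &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7], &v[8]);
    fclose(fp);

    for (i = 0; i < 9; i++) total += v[i];

    /* jiffies -> seconds using the user-space clock tick rate */
    printf("totalCpuTime = %llu jiffies (~%.0f s of CPU time since boot)\n",
           total, (double)total / sysconf(_SC_CLK_TCK));
    return 0;
}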


The /proc/<pid>/stat file

This file contains all the activity information for one process; every value in it is accumulated from system boot up to the current moment. The sample below is used to explain the fields we care about.

[zhengangen@buick ~]# cat /proc/6873/stat

6873 (a.out) R 6723 6873 6723 34819 6873 8388608 77 0 0 0 41958 31 0 0 25 0 3 0 5882654 1409024 56 4294967295 134512640 134513720 3215579040 0 2097798 0 0 0 0 0 0 0 17 0 0 0

Note: only the fields relevant to computing CPU usage are explained below (the numbers correspond to the sample line above).

pid (field 1, 6873): process ID.
utime (field 14, 41958): time this task has spent running in user mode, in jiffies.
stime (field 15, 31): time this task has spent running in kernel mode, in jiffies.
cutime (field 16, 0): user-mode time of the task's waited-for children, in jiffies.
cstime (field 17, 0): kernel-mode time of the task's waited-for children, in jiffies.

Conclusion: the process's total CPU time is processCpuTime = utime + stime + cutime + cstime; this value includes the CPU time of all of the process's threads.
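Below is a minimal, self-contained sketch of reading processCpuTime for a given pid (the helper name process_cpu_time is mine, not from the article). Field 2 (the command name) is wrapped in parentheses and may itself contain spaces, so it is safest to resume parsing after the last ')' rather than splitting the line on whitespace:

#include <stdio.h>
#include <string.h>

/* hypothetical helper: returns 0 on success and stores utime+stime+cutime+cstime */
int process_cpu_time(int pid, unsigned long long *out)
{
    char path[64], buf[1024];
    unsigned long long utime, stime;
    long long cutime, cstime;          /* cutime/cstime are signed in proc(5) */
    FILE *fp;
    char *p;

    snprintf(path, sizeof(path), "/proc/%d/stat", pid);
    if (!(fp = fopen(path, "r"))) return -1;
    if (!fgets(buf, sizeof(buf), fp)) { fclose(fp); return -1; }
    fclose(fp);

    /* the comm field (field 2) sits in parentheses and may contain spaces,
     * so parse from the last ')' instead of counting tokens from the start */
    if (!(p = strrchr(buf, ')'))) return -1;

    /* skip state (3) and fields 4-13, then read utime stime cutime cstime (14-17) */
    if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu %lld %lld",
               &utime, &stime, &cutime, &cstime) != 4)
        return -1;

    *out = utime + stime + (unsigned long long)cutime + (unsigned long long)cstime;
    return 0;
}

As with /proc/stat, the returned value is in jiffies and can be converted to seconds by dividing by sysconf(_SC_CLK_TCK).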


The /proc/<pid>/task/<tid>/stat file

This file contains all the activity information for one task (thread); every value in it is accumulated from system boot up to the current moment. Its format and field meanings are the same as those of /proc/<pid>/stat.

Note that the tid in this path is no longer a process ID but the ID of a Linux light-weight process (LWP), i.e. what we usually call a thread.

Conclusion: the thread's CPU time is threadCpuTime = utime + stime.
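In the same spirit, here is a hedged sketch of walking /proc/<pid>/task and reporting threadCpuTime for each thread (the function name list_thread_cpu_time is hypothetical, not from the article):

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

void list_thread_cpu_time(int pid)
{
    char path[128], buf[1024];
    struct dirent *de;
    DIR *dir;

    snprintf(path, sizeof(path), "/proc/%d/task", pid);
    if (!(dir = opendir(path))) { perror("opendir"); return; }

    while ((de = readdir(dir)) != NULL) {
        if (!isdigit((unsigned char)de->d_name[0]))
            continue;                          /* skip "." and ".." */
        snprintf(path, sizeof(path), "/proc/%d/task/%s/stat", pid, de->d_name);
        FILE *fp = fopen(path, "r");
        if (!fp || !fgets(buf, sizeof(buf), fp)) { if (fp) fclose(fp); continue; }
        fclose(fp);

        /* same "parse after the last ')'" trick as in the process sketch */
        unsigned long long utime = 0, stime = 0;
        char *p = strrchr(buf, ')');
        if (p && sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu",
                        &utime, &stime) == 2)
            printf("tid %s: threadCpuTime = %llu jiffies\n", de->d_name, utime + stime);
    }
    closedir(dir);
}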


Calculating the overall CPU usage

Method:

1. Take two CPU snapshots a sufficiently short interval apart, call them t1 and t2. Each snapshot is the 9-tuple (user, nice, system, idle, iowait, irq, softirq, steal, guest).

2. Compute the total CPU time totalCpuTime elapsed in the interval:
a) sum all fields of the first snapshot to get s1;
b) sum all fields of the second snapshot to get s2;
c) the difference is the total number of time slices in the interval: totalCpuTime = s2 - s1.

3. Compute the idle time: idle is the fourth field, so idle = idle2 - idle1.

4. Compute the CPU usage: pcpu = 100 * (total - idle) / total.

5. The same two-sample method gives the CPU share of an individual process or thread: sample its processCpuTime (or threadCpuTime) at the same two instants and divide the delta by totalCpuTime over the interval, as in the sketch below.
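Putting the pieces together, here is a small self-contained sketch (my own, with a 1-second window and pared-down error handling) that applies the two-sample method to one process, i.e. pcpu = 100 * (processCpuTime2 - processCpuTime1) / (totalCpuTime2 - totalCpuTime1):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* sum of the aggregate "cpu" line of /proc/stat, in jiffies */
static unsigned long long total_jiffies(void)
{
    unsigned long long v[9] = {0}, sum = 0;
    int i;
    FILE *fp = fopen("/proc/stat", "r");
    if (!fp) { perror("/proc/stat"); exit(1); }
    fscanf(fp, "cpu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
           &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7], &v[8]);
    fclose(fp);
    for (i = 0; i < 9; i++) sum += v[i];
    return sum;
}

/* utime + stime + cutime + cstime of one process, in jiffies */
static unsigned long long proc_jiffies(int pid)
{
    char path[64], buf[1024];
    unsigned long long utime = 0, stime = 0, cutime = 0, cstime = 0;
    snprintf(path, sizeof(path), "/proc/%d/stat", pid);
    FILE *fp = fopen(path, "r");
    if (!fp || !fgets(buf, sizeof(buf), fp)) { perror(path); exit(1); }
    fclose(fp);
    char *p = strrchr(buf, ')');       /* the comm field may contain spaces */
    if (!p) exit(1);
    sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %llu %llu %llu %llu",
           &utime, &stime, &cutime, &cstime);
    return utime + stime + cutime + cstime;
}

int main(int argc, char *argv[])
{
    int pid = argc > 1 ? atoi(argv[1]) : getpid();
    unsigned long long t1 = total_jiffies(), p1 = proc_jiffies(pid);
    sleep(1);
    unsigned long long t2 = total_jiffies(), p2 = proc_jiffies(pid);
    printf("pid %d used %.2f%% of the CPU over the last second\n",
           pid, t2 > t1 ? 100.0 * (p2 - p1) / (t2 - t1) : 0.0);
    return 0;
}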


Source code


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>     /* sleep() */

#define __DEBUG__ 1
#define CK_TIME   1     /* sampling interval in seconds */

int main(int argc, char *argv[])
{
    FILE *fp;
    char buf[128];
    char cpu[8];
    long int user, nice, sys, idle, iowait, irq, softirq;
    long int all1, all2, idle1, idle2;
    float usage;

    while (1)
    {
        fp = fopen("/proc/stat", "r");
        if (fp == NULL)
        {
            perror("fopen");
            exit(1);
        }

        /* first sample: the aggregate "cpu" line */
        fgets(buf, sizeof(buf), fp);
#if __DEBUG__
        printf("buf=%s", buf);
#endif
        sscanf(buf, "%7s %ld %ld %ld %ld %ld %ld %ld",
               cpu, &user, &nice, &sys, &idle, &iowait, &irq, &softirq);
        all1 = user + nice + sys + idle + iowait + irq + softirq;
        idle1 = idle;
        rewind(fp);

        /* second sample, CK_TIME seconds later */
        sleep(CK_TIME);
        memset(buf, 0, sizeof(buf));
        cpu[0] = '\0';
        user = nice = sys = idle = iowait = irq = softirq = 0;
        fgets(buf, sizeof(buf), fp);
#if __DEBUG__
        printf("buf=%s", buf);
#endif
        sscanf(buf, "%7s %ld %ld %ld %ld %ld %ld %ld",
               cpu, &user, &nice, &sys, &idle, &iowait, &irq, &softirq);
        all2 = user + nice + sys + idle + iowait + irq + softirq;
        idle2 = idle;

        /* usage = busy time / total time over the interval */
        usage = (float)(all2 - all1 - (idle2 - idle1)) / (all2 - all1) * 100;
        printf("all=%ld\n", all2 - all1);
        printf("busy=%ld\n", all2 - all1 - (idle2 - idle1));
        printf("cpu use = %.2f%%\n", usage);
        printf("=======================\n");
        fclose(fp);
    }
    return 0;
}

Compile with gcc


gcc -o cpu_use -g cpu_use.c

Run


buf=cpu 15824 100 13772 879622 11014 40 720 0 0

buf=cpu 15837 100 13790 879731 11014 40 720 0 0

all=140

busy=31

cpu use = 22.14%

=======================

buf=cpu 15837 100 13790 879731 11014 40 720 0 0

buf=cpu 15857 100 13824 879786 11014 40 721 0 0

all=110

busy=55

cpu use = 50.00%

=======================

buf=cpu 15857 100 13824 879786 11014 40 721 0 0

buf=cpu 15877 100 13856 879842 11014 41 721 0 0

all=109

busy=53

cpu use = 48.62%

=======================

buf=cpu 15877 100 13857 879842 11014 41 721 0 0

buf=cpu 15897 100 13889 879901 11014 41 722 0 0

all=112

busy=53

cpu use = 47.32%

=======================

The program above only demonstrates how to obtain the CPU usage; in a real system you would normally run the sampling in a background thread (a minimal sketch of that idea follows). After the sketch comes a longer example, modeled on the way top reads /proc/stat, which handles multiple CPUs and prints per-state percentages.
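The following sketch is my own illustration, not part of the original article: a background pthread re-samples /proc/stat once a second using the same two-sample calculation and publishes the latest percentage (the variable cpu_usage_percent is hypothetical) for the rest of the program to read.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static double cpu_usage_percent = 0.0;               /* latest published result */
static pthread_mutex_t usage_lock = PTHREAD_MUTEX_INITIALIZER;

/* read the aggregate "cpu" line (7 fields, as in the program above) */
static unsigned long long read_cpu(unsigned long long *idle)
{
    unsigned long long u = 0, n = 0, s = 0, i = 0, w = 0, x = 0, y = 0;
    FILE *fp = fopen("/proc/stat", "r");
    if (fp) {
        fscanf(fp, "cpu %llu %llu %llu %llu %llu %llu %llu",
               &u, &n, &s, &i, &w, &x, &y);
        fclose(fp);
    }
    *idle = i;
    return u + n + s + i + w + x + y;
}

static void *sampler(void *arg)
{
    unsigned long long t1, t2, i1, i2;
    (void)arg;
    for (;;) {
        t1 = read_cpu(&i1);
        sleep(1);
        t2 = read_cpu(&i2);
        pthread_mutex_lock(&usage_lock);
        cpu_usage_percent = t2 > t1 ? 100.0 * ((t2 - t1) - (i2 - i1)) / (t2 - t1) : 0.0;
        pthread_mutex_unlock(&usage_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, sampler, NULL);
    for (;;) {                                       /* the "rest of the system" */
        sleep(2);
        pthread_mutex_lock(&usage_lock);
        printf("cpu use = %.2f%%\n", cpu_usage_percent);
        pthread_mutex_unlock(&usage_lock);
    }
    return 0;
}

Compile with something like gcc -o cpu_mon cpu_mon.c -lpthread (the file name is arbitrary).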

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdarg.h>
#include <errno.h>

/* Display format: only user/system/nice/idle are shown; the remaining
 * percentages passed to printf() are simply ignored. (The original top
 * format string also carried \02/\03 attribute markers for its display
 * layer; they are dropped here so the output is plain text.) */
#define STATES_line2x4 "%s %#5.1f%% user, %#5.1f%% system, %#5.1f%% nice, %#5.1f%% idle\n"

static const char *States_fmts = STATES_line2x4;

// Total number of CPUs
static int Cpu_tot;

// These typedefs attempt to ensure consistent 'ticks' handling
typedef unsigned long long TIC_t;

// This structure stores a frame's cpu tics used in history
// calculations. It exists primarily for SMP support but serves
// all environments.
typedef struct CPU_t {
    TIC_t u, n, s, i, w, x, y;                              // as represented in /proc/stat
    TIC_t u_sav, s_sav, n_sav, i_sav, w_sav, x_sav, y_sav;  // in the order of our display
    unsigned id;                                            // the CPU ID number
} CPU_t;

// This routine simply formats whatever the caller wants and
// returns a pointer to the resulting 'const char' string...
// (carried over from top; unused in this small demo)
static const char *fmtmk (const char *fmts, ...)
{
    static char buf[2048];      // with help stuff, our buffer requirements exceed 1k
    va_list va;

    va_start(va, fmts);
    vsnprintf(buf, sizeof(buf), fmts, va);
    va_end(va);
    return (const char *)buf;
}

static CPU_t *cpus_refresh (CPU_t *cpus)
{
    static FILE *fp = NULL;
    int i;
    int num;
    // enough for a /proc/stat CPU line (not the intr line)
    char buf[256 + 64];

    if (!fp) {
        if (!(fp = fopen("/proc/stat", "r"))) {
            printf("Failed /proc/stat open: %s\n", strerror(errno));
            exit(1);
        }
        // one slot per CPU plus a final slot for the summary line
        cpus = (CPU_t *)malloc((1 + Cpu_tot) * sizeof(CPU_t));
        memset(cpus, '\0', (1 + Cpu_tot) * sizeof(CPU_t));
    }
    rewind(fp);
    fflush(fp);

    // first value the last slot with the cpu summary line
    if (!fgets(buf, sizeof(buf), fp)) printf("failed /proc/stat read\n");
    cpus[Cpu_tot].x = 0;    // FIXME: can't tell by kernel version number
    cpus[Cpu_tot].y = 0;    // FIXME: can't tell by kernel version number
    num = sscanf(buf, "cpu %llu %llu %llu %llu %llu %llu %llu",
        &cpus[Cpu_tot].u,
        &cpus[Cpu_tot].n,
        &cpus[Cpu_tot].s,
        &cpus[Cpu_tot].i,
        &cpus[Cpu_tot].w,
        &cpus[Cpu_tot].x,
        &cpus[Cpu_tot].y
    );
    if (num < 4)
        printf("failed /proc/stat read\n");

    // and just in case we're 2.2.xx compiled without SMP support...
    if (Cpu_tot == 1) {
        cpus[1].id = 0;
        memcpy(cpus, &cpus[1], sizeof(CPU_t));
    }

    // now value each separate cpu's tics
    for (i = 0; 1 < Cpu_tot && i < Cpu_tot; i++) {
        if (!fgets(buf, sizeof(buf), fp)) printf("failed /proc/stat read\n");
        cpus[i].x = 0;      // FIXME: can't tell by kernel version number
        cpus[i].y = 0;      // FIXME: can't tell by kernel version number
        num = sscanf(buf, "cpu%u %llu %llu %llu %llu %llu %llu %llu",
            &cpus[i].id,
            &cpus[i].u, &cpus[i].n, &cpus[i].s, &cpus[i].i, &cpus[i].w, &cpus[i].x, &cpus[i].y
        );
        if (num < 4)
            printf("failed /proc/stat read\n");
    }
    return cpus;
}

static void summaryhlp (CPU_t *cpu, const char *pfx)
{
    // we'll trim to zero if we get negative time ticks,
    // which has happened with some SMP kernels (pre-2.4?)
#define TRIMz(x) ((tz = (long long)(x)) < 0 ? 0 : tz)
    long long u_frme, s_frme, n_frme, i_frme, w_frme, x_frme, y_frme, tot_frme, tz;
    float scale;

    if (cpu == NULL) {
        printf("NULL@\n");
        return;
    }
    printf("u = %llu, u_sav = %llu\n", cpu->u, cpu->u_sav);
    u_frme = cpu->u - cpu->u_sav;
    s_frme = cpu->s - cpu->s_sav;
    n_frme = cpu->n - cpu->n_sav;
    i_frme = TRIMz(cpu->i - cpu->i_sav);
    w_frme = cpu->w - cpu->w_sav;
    x_frme = cpu->x - cpu->x_sav;
    y_frme = cpu->y - cpu->y_sav;
    tot_frme = u_frme + s_frme + n_frme + i_frme + w_frme + x_frme + y_frme;
    if (tot_frme < 1) tot_frme = 1;
    scale = 100.0 / (float)tot_frme;
    printf("scale = %0.5f\n", scale);

    // display some kinda' cpu state percentages
    // (who or what is explained by the passed prefix)
    printf(States_fmts,
        pfx,
        (float)u_frme * scale,
        (float)s_frme * scale,
        (float)n_frme * scale,
        (float)i_frme * scale,
        (float)w_frme * scale,
        (float)x_frme * scale,
        (float)y_frme * scale
    );

    // remember for next time around
    cpu->u_sav = cpu->u;
    cpu->s_sav = cpu->s;
    cpu->n_sav = cpu->n;
    cpu->i_sav = cpu->i;
    cpu->w_sav = cpu->w;
    cpu->x_sav = cpu->x;
    cpu->y_sav = cpu->y;
#undef TRIMz
}

int main()
{
    static CPU_t *smpcpu = NULL;

    for (;;) {
        Cpu_tot = sysconf(_SC_NPROCESSORS_ONLN);
        printf("CPU number: %d\n", Cpu_tot);

        smpcpu = cpus_refresh(smpcpu);
        summaryhlp(&smpcpu[Cpu_tot], "Cpu(s):");
        printf("\n");

        sleep(3);

        smpcpu = cpus_refresh(smpcpu);
        summaryhlp(&smpcpu[Cpu_tot], "Cpu(s):");
        printf("++++++++++++++++++++++++++\n");
    }
    return 0;
}