free and vmstat commands

from: https://fedoramagazine.org/system-insights-with-command-line-tools-free-and-vmstat/

free

$ free -h
       total    used    free   shared  buff/cache  available
Mem:    23Gi    14Gi   575Mi    3,3Gi        12Gi      8,8Gi
Swap:  8,0Gi   6,6Gi   1,4Gi

The free command parses /proc/meminfo and prints totals for physical memory and swap, along with kernel buffers and cache. Use -h for human-readable units, -s 1 to refresh every second, and -c N to stop after N samples, which is handy for capturing a trend while something runs in parallel. For example, free -s 60 -c 1440 records one sample per minute for 24 hours without installing extra monitoring daemons.
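
A minimal sketch of such a long-running capture, assuming memory.log is just an arbitrary file name; the job is sent to the background so the terminal stays free:

$ free -h -s 60 -c 1440 > memory.log &

free does not timestamp its samples, so for a time-correlated log you can instead wrap date and free -h in a plain shell loop with a sleep between iterations.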

Free memory refers to RAM that is entirely unoccupied. It isn’t being used by any process or for caching.

Available memory, on the other hand, represents an estimate of how much memory can be used by new or running processes without resorting to swap. It includes free memory plus parts of the cache and buffers that the system can reclaim quickly if needed.

Low free memory is not a problem in itself; available memory is usually the figure to be concerned about.
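
Both figures come straight from /proc/meminfo, so you can check the kernel's own numbers directly; the field names below are the ones documented in proc(5):

$ grep -E '^(MemFree|MemAvailable):' /proc/meminfo

MemAvailable is the estimate that free shows in its available column; the gap between the two fields is roughly the reclaimable cache and buffers described above.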

Spotting problems with free

  • Rapidly shrinking available combined with rising swap used indicates real memory pressure (see the sketch after this list).

  • Large swap-in/out spikes point to thrashing workloads or runaway memory consumers.
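
A minimal sketch for watching both patterns, assuming GNU awk (for strftime), stdbuf from coreutils to keep the pipe line-buffered, a 30-second interval, and an arbitrary 1 GiB threshold; with free -m, column 7 of the Mem: line is available and column 3 of the Swap: line is used swap:

$ stdbuf -oL free -m -s 30 | awk '/^Mem:/ && $7 < 1024 {print strftime("%T"), "available low:", $7, "MiB"} /^Swap:/ && $3 > prev {print strftime("%T"), "swap used rising:", $3, "MiB"; prev = $3}'

Adjust the threshold and interval to your workload; the point is only to turn the two warning signs into timestamped lines you can correlate with application logs.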

vmstat

The vmstat command reports on processes, memory, swap, block I/O, interrupts, context switches, and CPU activity. The invocation below prints three samples one second apart:

$ vmstat 1 3
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st gu
 2  0 7102404 1392528     36 12335148    8   21   130   724 2851   19 15  7 77  0  0  0
 0  0 7102404 1392560     36 12335188    0    0     0     0 5779 7246 14 10 77  0  0  0
 0  0 7102404 1373640     36 12349928    0    0     8    48 5141 6525 12  9 79  0  0  0

Key columns: r is the run queue (processes runnable or waiting for a CPU), si/so are memory swapped in from and out to disk per second, bi/bo are data read from and written to block devices, and wa is the share of CPU time spent waiting for I/O.

Catching a memory leak

Run vmstat 500 in one terminal while your suspect application runs in another. If free keeps falling and si/so climb over successive samples, physical RAM is being exhausted and the kernel starts swapping, which is classic leak behavior.
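
One way to make the swap trend jump out is to filter the samples with awk; a minimal sketch, assuming GNU awk, stdbuf to keep the pipe line-buffered, a 5-second interval, and that si and so are columns 7 and 8 in your vmstat build:

$ stdbuf -oL vmstat -n 5 | awk 'NR > 2 && $7 + $8 > 0 {print strftime("%T"), "swapping: si=" $7 " so=" $8}'

A steady stream of lines here, together with a falling free column, matches the leak pattern described above.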

Finding I/O saturation

When wa (CPU wait) and bo (blocks out) soar while r remains modest, the CPU is idle but stuck waiting for the disk. Consider adding faster storage or tuning I/O scheduler parameters.
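
The same filtering idea can flag I/O-bound samples; a minimal sketch, assuming the usual column layout (wa is column 16, bo is column 10) and an arbitrary 30% threshold:

$ stdbuf -oL vmstat -n 5 | awk 'NR > 2 && $16 > 30 {print strftime("%T"), "wa=" $16 "%", "bo=" $10}'

If sysstat is installed, iostat -x can then show which device is actually saturated.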

Detecting CPU over-commit

A sustained r value around double the number of logical cores, with low wa and plenty of free memory, means the CPU is the bottleneck rather than memory or I/O. Use top or htop to locate the busiest processes, or scale the workload out accordingly.
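
To put a number on it, compare r against the core count; a minimal sketch, assuming GNU awk and nproc from coreutils:

$ stdbuf -oL vmstat -n 5 | awk -v cores="$(nproc)" 'NR > 2 && $1 > 2 * cores {print strftime("%T"), "run queue", $1, "vs", cores, "cores"}'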
