This article walks through how to use the Uber JVM profiler. uber-jvm-profiler is a Java agent for collecting JVM metrics from distributed processes, such as CPU, memory, I/O, and GC information.
Build: with Maven and JDK >= 8 installed, run `mvn clean package`.
Notes
The profiler is deployed as a plain Java agent; no changes to the application code are required.
Usage
```
java -javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.KafkaOutputReporter,brokerList='kafka1:9092',topicPrefix=demo_,tag=tag-demo,metricInterval=5000,sampleInterval=0 -cp target/jvm-profiler-1.0.0.jar <your-main-class>
```

(Replace `<your-main-class>` with your application's main class; the original command left it off.)
Option descriptions
Parameter | Description |
---|---|
reporter | Reporter class; com.uber.profiling.reporters.KafkaOutputReporter is fine here |
brokerList | When reporter is com.uber.profiling.reporters.KafkaOutputReporter: comma-separated list of Kafka brokers |
topicPrefix | When reporter is com.uber.profiling.reporters.KafkaOutputReporter: prefix for the Kafka topic names |
tag | A tag value attached to every metric record sent to the reporter |
metricInterval | How often metrics are reported, in ms; set according to your needs |
sampleInterval | How often JVM stack-trace samples are reported, in ms; set according to your needs (0 disables stack sampling) |
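The agent options above are passed as one comma-separated `key=value` string after the agent jar path. A minimal sketch of parsing such a string (an illustration of the option format, not the profiler's actual parser):

```java
import java.util.HashMap;
import java.util.Map;

public class AgentArgsSketch {
    // Split "k1=v1,k2=v2,..." into a map; values here must not contain commas.
    static Map<String, String> parseArgs(String agentArgs) {
        Map<String, String> result = new HashMap<>();
        for (String pair : agentArgs.split(",")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                result.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        String agentArgs = "reporter=com.uber.profiling.reporters.KafkaOutputReporter,"
                + "brokerList=kafka1:9092,topicPrefix=demo_,tag=tag-demo,"
                + "metricInterval=5000,sampleInterval=0";
        Map<String, String> opts = parseArgs(agentArgs);
        System.out.println("metricInterval=" + opts.get("metricInterval"));
        System.out.println("brokerList=" + opts.get("brokerList"));
    }
}
```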
Sample output
```json
{
  "nonHeapMemoryTotalUsed": 11890584.0,
  "bufferPools": [
    { "totalCapacity": 0, "name": "direct", "count": 0, "memoryUsed": 0 },
    { "totalCapacity": 0, "name": "mapped", "count": 0, "memoryUsed": 0 }
  ],
  "heapMemoryTotalUsed": 24330736.0,
  "epochMillis": 1515627003374,
  "nonHeapMemoryCommitted": 13565952.0,
  "heapMemoryCommitted": 257425408.0,
  "memoryPools": [
    { "peakUsageMax": 251658240, "usageMax": 251658240, "peakUsageUsed": 1194496, "name": "Code Cache", "peakUsageCommitted": 2555904, "usageUsed": 1173504, "type": "Non-heap memory", "usageCommitted": 2555904 },
    { "peakUsageMax": -1, "usageMax": -1, "peakUsageUsed": 9622920, "name": "Metaspace", "peakUsageCommitted": 9830400, "usageUsed": 9622920, "type": "Non-heap memory", "usageCommitted": 9830400 },
    { "peakUsageMax": 1073741824, "usageMax": 1073741824, "peakUsageUsed": 1094160, "name": "Compressed Class Space", "peakUsageCommitted": 1179648, "usageUsed": 1094160, "type": "Non-heap memory", "usageCommitted": 1179648 },
    { "peakUsageMax": 1409286144, "usageMax": 1409286144, "peakUsageUsed": 24330736, "name": "PS Eden Space", "peakUsageCommitted": 67108864, "usageUsed": 24330736, "type": "Heap memory", "usageCommitted": 67108864 },
    { "peakUsageMax": 11010048, "usageMax": 11010048, "peakUsageUsed": 0, "name": "PS Survivor Space", "peakUsageCommitted": 11010048, "usageUsed": 0, "type": "Heap memory", "usageCommitted": 11010048 },
    { "peakUsageMax": 2863661056, "usageMax": 2863661056, "peakUsageUsed": 0, "name": "PS Old Gen", "peakUsageCommitted": 179306496, "usageUsed": 0, "type": "Heap memory", "usageCommitted": 179306496 }
  ],
  "processCpuLoad": 0.0008024004394748531,
  "systemCpuLoad": 0.23138430784607697,
  "processCpuTime": 496918000,
  "appId": null,
  "name": "24103@machine01",
  "host": "machine01",
  "processUuid": "3c2ec835-749d-45ea-a7ec-e4b9fe17c23a",
  "tag": "mytag",
  "gc": [
    { "collectionTime": 0, "name": "PS Scavenge", "collectionCount": 0 },
    { "collectionTime": 0, "name": "PS MarkSweep", "collectionCount": 0 }
  ]
}
```
Notes
Unlike a plain Java application, for Spark you must distribute jvm-profiler.jar to every node (e.g. by placing it on HDFS).
Usage
Add the following flags to your spark-submit command:

```
--jars hdfs:///public/libs/jvm-profiler-1.0.0.jar \
--conf spark.driver.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.KafkaOutputReporter,brokerList='kafka1:9092',topicPrefix=demo_,tag=tag-demo,metricInterval=5000,sampleInterval=0 \
--conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar=reporter=com.uber.profiling.reporters.KafkaOutputReporter,brokerList='kafka1:9092',topicPrefix=demo_,tag=tag-demo,metricInterval=5000,sampleInterval=0
```
Option descriptions
The options are the same as for a plain Java application; see the parameter table earlier in this article.
Sample output
The output format is identical to the sample shown in the Java-application section above.
Available reporters
Reporter | Description |
---|---|
ConsoleOutputReporter | The default reporter; mainly useful for debugging |
FileOutputReporter | File-based; not suitable for distributed environments; requires outputDir |
KafkaOutputReporter | Kafka-based; the most common choice in production; requires brokerList and topicPrefix |
GraphiteOutputReporter | Graphite-based; requires graphite.host and related settings |
RedisOutputReporter | Redis-based; build with `mvn -P redis clean package` |
InfluxDBOutputReporter | InfluxDB-based; build with `mvn -P influxdb clean package`; requires influxdb.host and related settings |
For production, KafkaOutputReporter is recommended: it is the most flexible to operate, and it pairs well with ClickHouse and Grafana for metric storage and visualization.
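Each record consumed from Kafka is a flat JSON document in the shape shown in the sample output. A dependency-free sketch of pulling a couple of fields out of such a record before loading it into a store like ClickHouse (a real pipeline would use a proper JSON library; the field names match the sample output, the rest is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MetricRecordSketch {
    // Extract one numeric field from a flat JSON metrics record via regex.
    static double extractNumber(String json, String field) {
        Matcher m = Pattern.compile("\"" + field + "\"\\s*:\\s*([-0-9.Ee]+)").matcher(json);
        if (!m.find()) throw new IllegalArgumentException("missing field: " + field);
        return Double.parseDouble(m.group(1));
    }

    public static void main(String[] args) {
        // Trimmed-down record in the shape of the sample output above
        String record = "{\"heapMemoryTotalUsed\": 24330736.0, "
                + "\"processCpuLoad\": 0.0008024004394748531, "
                + "\"host\": \"machine01\", \"tag\": \"tag-demo\"}";
        // Convert heap usage from bytes to whole megabytes
        long heapUsedMb = Math.round(extractNumber(record, "heapMemoryTotalUsed") / (1024 * 1024));
        System.out.println("heapUsedMB=" + heapUsedMb);
        System.out.println("cpuLoad=" + extractNumber(record, "processCpuLoad"));
    }
}
```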
Source code analysis
jvm-profiler is implemented as a Java agent. The project's pom file sets both the Premain-Class and Agent-Class entries in MANIFEST.MF to com.uber.profiling.Agent, and the actual logic lives in AgentImpl.
Let's walk through AgentImpl's run method:
```java
public void run(Arguments arguments, Instrumentation instrumentation,
                Collection<AutoCloseable> objectsToCloseOnShutdown) {
    if (arguments.isNoop()) {
        logger.info("Agent noop is true, do not run anything");
        return;
    }

    Reporter reporter = arguments.getReporter();

    String processUuid = UUID.randomUUID().toString();

    String appId = null;
    String appIdVariable = arguments.getAppIdVariable();
    if (appIdVariable != null && !appIdVariable.isEmpty()) {
        appId = System.getenv(appIdVariable);
    }
    if (appId == null || appId.isEmpty()) {
        appId = SparkUtils.probeAppId(arguments.getAppIdRegex());
    }

    if (!arguments.getDurationProfiling().isEmpty()
            || !arguments.getArgumentProfiling().isEmpty()) {
        instrumentation.addTransformer(new JavaAgentFileTransformer(
                arguments.getDurationProfiling(), arguments.getArgumentProfiling()));
    }

    List<Profiler> profilers = createProfilers(reporter, arguments, processUuid, appId);
    ProfilerGroup profilerGroup = startProfilers(profilers);

    Thread shutdownHook = new Thread(new ShutdownHookRunner(
            profilerGroup.getPeriodicProfilers(), Arrays.asList(reporter), objectsToCloseOnShutdown));
    Runtime.getRuntime().addShutdownHook(shutdownHook);
}
```
arguments.getReporter() resolves the reporter: if none was configured, the default (ConsoleOutputReporter) is used; otherwise the configured reporter class is instantiated.
String appId: the app ID is resolved first from the configuration; if not set there, it is looked up from the environment, and for Spark applications it ends up as the value of spark.app.id.
List&lt;Profiler&gt; profilers = createProfilers(reporter, arguments, processUuid, appId) creates the profilers; by default these are CpuAndMemoryProfiler, ThreadInfoProfiler, and ProcessInfoProfiler.
1. CpuAndMemoryProfiler, ThreadInfoProfiler, and ProcessInfoProfiler read their data from JMX; ProcessInfoProfiler additionally reads from /proc.
2. If durationProfiling, argumentProfiling, sampleInterval, or ioProfiling is set, the corresponding MethodDurationProfiler (reports how long method calls take), MethodArgumentProfiler (reports method argument values), StacktraceReporterProfiler, or IOProfiler is added.
3. MethodArgumentProfiler and MethodDurationProfiler rewrite the target classes with the javassist bytecode library; see JavaAgentFileTransformer for the details.
4. StacktraceReporterProfiler reads its data from JMX.
5. IOProfiler reads data from the relevant entries under /proc on the local machine.
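The JMX-based stack sampling in item 4 can be illustrated with the standard ThreadMXBean. This is a simplified sketch of the idea, not the profiler's actual code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class StackSampleSketch {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Take one sample of every live thread's stack; a profiler would do
        // this periodically and aggregate the frames before reporting.
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            StringBuilder sb = new StringBuilder(info.getThreadName());
            for (StackTraceElement frame : info.getStackTrace()) {
                sb.append(" <- ").append(frame.getClassName())
                  .append('.').append(frame.getMethodName());
            }
            System.out.println(sb);
        }
    }
}
```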
ProfilerGroup profilerGroup = startProfilers(profilers) starts the periodic metric reporting.
Profilers are split into oneTimeProfilers and periodicProfilers; ProcessInfoProfiler belongs to the oneTimeProfilers because process information does not change while the process is running, so there is no need to report it periodically.
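The one-time vs. periodic split can be sketched with a ScheduledExecutorService: profilers with no interval run once at startup, the rest are scheduled at a fixed rate. The interface and names here are illustrative, not the actual jvm-profiler API:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ProfilerSchedulingSketch {
    interface Profiler {
        long getIntervalMillis(); // <= 0 means "run once"
        void profile();
    }

    static ScheduledExecutorService start(List<Profiler> profilers) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        for (Profiler p : profilers) {
            if (p.getIntervalMillis() <= 0) {
                p.profile(); // one-time profiler, e.g. static process info
            } else {
                scheduler.scheduleAtFixedRate(
                        p::profile, 0, p.getIntervalMillis(), TimeUnit.MILLISECONDS);
            }
        }
        return scheduler;
    }

    public static void main(String[] args) throws InterruptedException {
        Profiler oneTime = new Profiler() {
            public long getIntervalMillis() { return 0; }
            public void profile() { System.out.println("one-time: process info"); }
        };
        Profiler periodic = new Profiler() {
            public long getIntervalMillis() { return 50; }
            public void profile() { System.out.println("periodic: cpu/memory"); }
        };
        ScheduledExecutorService s = start(List.of(oneTime, periodic));
        Thread.sleep(120);
        s.shutdownNow();
    }
}
```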
That completes the walkthrough of the overall flow.