Milos Vyletel [Mon, 8 Jun 2015 14:50:16 +0000 (16:50 +0200)]
perf tools: Avoid possible race condition in copyfile()
Use unique temporary files when copying to the buildid dir to prevent races in case multiple instances are trying to copy the same file. This is done by:
- creating a template in the form <path>/.<filename>.XXXXXX, where the suffix is used by mkstemp() to create a unique file
- changing the file mode
- copying the content
- if successful, linking the temp file to the target file
- unlinking the temp file
At this point the only file left at the target path should be the desired one, either created by us or by another instance if we raced. This should also prevent not-yet-fully-copied files from being visible to other perf instances that could try to parse them.
On top of that, slow_copyfile() no longer needs to deal with the file mode when creating the file, since the temporary file is already created and its mode is set.
Successfully tested by myself by running perf record, archive and reading the data on another system, and by running perf buildid-cache on the perf binary itself. I also reverted the fix from 0635b0f to expose the previously fixed EEXIST race, and the recreator test passed successfully.
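For illustration, a minimal C sketch of the sequence above, under simplifying assumptions (the real template hides the temp file behind a leading dot, and error handling here is abbreviated):

#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hedged sketch: copy 'from' to 'to' via a unique temp file in the target
 * directory, then link() the finished copy into place. */
static int copyfile_atomic(const char *from, const char *to, mode_t mode)
{
	char tmp[PATH_MAX], buf[4096];
	ssize_t n;
	int in = -1, out, err = -1;

	snprintf(tmp, sizeof(tmp), "%s.XXXXXX", to);	/* simplified template */
	out = mkstemp(tmp);				/* unique temporary file */
	if (out < 0)
		return -1;
	if (fchmod(out, mode))				/* set the final mode up front */
		goto done;
	in = open(from, O_RDONLY);
	if (in < 0)
		goto done;
	while ((n = read(in, buf, sizeof(buf))) > 0)	/* copy content */
		if (write(out, buf, n) != n)
			goto done;
	if (n < 0)
		goto done;
	err = link(tmp, to);				/* expose the finished copy */
	if (err && errno == EEXIST)			/* another instance won the race */
		err = 0;
done:
	if (in >= 0)
		close(in);
	close(out);
	unlink(tmp);					/* the temp name always goes away */
	return err;
}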
Signed-off-by: Milos Vyletel <milos@redhat.com> Acked-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Steven Rostedt <rostedt@goodmis.org> Link: http://lkml.kernel.org/r/1433775018-19868-1-git-send-email-milos@redhat.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This has a different model than the 'thread' and 'map' struct lifetimes:
there is not a definitive "don't use this DSO anymore" event, i.e. we may
get many 'struct map' holding references to the '/usr/lib64/libc-2.20.so'
DSO but then at some point some DSO may have no references but we still
don't want to straight away release its resources, because "soon" we may
get a new 'struct map' that needs it and we want to reuse its symtab or
other resources.
So we need some way to garbage collect it when crossing some memory usage threshold, which is left for another patch; for now it is sufficient to release it when calling dsos__exit(), i.e. when deleting the whole list as part of deleting the 'struct machine' containing it, which will leave only referenced objects being used.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-majzgz07cm90t2tejrjy4clf@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Protect accesses to the dso rbtrees/lists with a rw lock
To allow concurrent access, next step: refcount struct dso instances, so that we can ditch unused ones when the last map pointing to them goes away.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-yk1k08etpd2aoe3tnrf0oizn@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Calling the function 'machine__new_module' implies a new 'module' will be allocated, when in fact what is returned is a 'struct map' instance that will not necessarily be instantiated: if one already exists with the given module name, it will be returned instead.
So be consistent with other "find and if not there, create" like
functions, like machine__findnew_thread, machine__findnew_dso, etc, and
rename it to machine__findnew_module_map(), that in turn will call
machine__findnew_module_dso().
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-acv830vd3hwww2ih5vjtbmu3@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
He Kuang [Thu, 28 May 2015 13:17:30 +0000 (13:17 +0000)]
perf record: Fix perf.data size in no-buildid mode
The size of perf.data is not updated in no-buildid mode, which gives a wrong output result.
Before this patch:
$ perf.perf record -B -e syscalls:sys_enter_open uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.000 MB perf.data ]
After this patch:
$ perf.perf record -B -e syscalls:sys_enter_open uname
Linux
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.001 MB perf.data ]
He Kuang [Thu, 28 May 2015 13:28:54 +0000 (13:28 +0000)]
tools lib traceevent: Export dynamic symbols used by traceevent plugins
Traceevent plugins need dynamic symbols exported from libtraceevent.a, otherwise a dlopen error will occur during plugin loading.
This patch uses a dynamic-list file to export the dynamic symbols used by the plugins into the perf executable.
The problem is covered up if feature-libpython is enabled, because
PYTHON_EMBED_LDOPTS contains '-Xlinker --export-dynamic' which adds all
symbols to the dynamic symbol table. So we should reproduce the problem
by setting NO_LIBPYTHON=1.
Before this patch:
(Prepare plugins)
$ ls /root/.traceevent/plugins/
plugin_sched_switch.so
plugin_function.so
...
$ perf record -e 'ftrace:function' ls
$ perf script
Warning: could not load plugin '/mnt/data/root/.traceevent/plugins/plugin_sched_switch.so'
/root/.traceevent/plugins/plugin_sched_switch.so: undefined symbol: pevent_unregister_event_handler
Kan Liang [Sun, 10 May 2015 19:13:15 +0000 (15:13 -0400)]
perf tools: handle PERF_RECORD_LOST_SAMPLES
This patch modifies the perf tool to handle the new RECORD type,
PERF_RECORD_LOST_SAMPLES.
The number of lost-sample events is stored in
.nr_events[PERF_RECORD_LOST_SAMPLES]. The exact number of samples
which the kernel dropped is stored in total_lost_samples.
When the percentage of dropped samples is greater than 5%, a warning
is printed.
Here are some examples:
Eg 1, Recording different frequently-occurring events is safe with the
patch. Only a very low drop rate is associated with such actions.
$ perf record -e '{cycles:p,instructions:p}' -c 20003 --no-time ~/tchain ~/tchain
$ perf report --stdio --group
# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 24
#
# Samples: 120K of event 'anon group { cycles:p, instructions:p }'
# Event count (approx.): 24048600000
#
# Overhead Command Shared Object Symbol
# ................  ...........  ................  ..................................
#
99.74% 99.86% tchain_edit tchain_edit [.] f3
0.09% 0.02% tchain_edit tchain_edit [.] f2
0.04% 0.00% tchain_edit [kernel.vmlinux] [k] ixgbe_read_reg
Eg 2, Recording the same thing multiple times can lead to high drop
rate, but it is not a useful configuration.
$ perf record -e '{cycles:p,cycles:p}' -c 20003 --no-time ~/tchain
Warning: Processed 600592 samples and lost 99.73% samples!
[perf record: Woken up 148 times to write data]
[perf record: Captured and wrote 36.922 MB perf.data (1206322 samples)]
[perf record: Woken up 1 times to write data]
[perf record: Captured and wrote 0.121 MB perf.data (1629 samples)]
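For illustration, a hedged sketch of the threshold check described above; the identifiers and the exact formula are illustrative, not necessarily what the tool uses:

#include <stdio.h>

/* Warn if more than 5% of all samples were dropped by the kernel. */
static void warn_lost_samples(unsigned long long total_samples,
			      unsigned long long total_lost_samples)
{
	double drop;

	if (!total_lost_samples)
		return;

	drop = 100.0 * total_lost_samples /
	       (total_samples + total_lost_samples);
	if (drop > 5.0)
		fprintf(stderr,
			"Warning: Processed %llu samples and lost %.2f%% samples!\n",
			total_samples + total_lost_samples, drop);
}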
Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285195-14269-9-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
After enlarging the PEBS interrupt threshold, there may be some mixed up
PEBS samples which are discarded by the kernel.
This patch makes the kernel emit a PERF_RECORD_LOST_SAMPLES record with
the number of possible discarded records when it is impossible to demux
the samples.
It makes sure the user is not left in the dark about such discards.
Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: eranian@google.com Link: http://lkml.kernel.org/r/1431285195-14269-8-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Yan, Zheng [Wed, 6 May 2015 19:33:52 +0000 (15:33 -0400)]
perf/intel/x86: Enlarge the PEBS buffer
Currently the PEBS buffer size is 4k; it can only hold about 21
PEBS records. This patch enlarges the PEBS buffer size to 64k
(the same as the BTS buffer).
64k memory can hold about 330 PEBS records. This will significantly
reduce the number of PMIs when batched PEBS interrupts are enabled.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: eranian@google.com Link: http://lkml.kernel.org/r/1430940834-8964-7-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
PEBS always had the capability to log samples to its buffers without
an interrupt. Traditionally perf has not used this but always set the
PEBS threshold to one.
For frequently occurring events (like cycles or branches or load/store) this in turn requires using a relatively high sampling period to avoid overloading the system with PMI processing. This in turn increases the sampling error.
For the common cases we still need to use the PMI because the PEBS
hardware has various limitations. The biggest one is that it cannot supply a callgraph. It also requires setting a fixed period, as the hardware does not support an adaptive period. Another issue is that it
cannot supply a time stamp and some other options. To supply a TID it
requires flushing on context switch. It can however supply the IP, the
load/store address, TSX information, registers, and some other things.
So we can make PEBS work for some specific cases, basically as long as
you can do without a callgraph and can set the period you can use this
new PEBS mode.
The main benefit is the ability to support much lower sampling periods (down to -c 1000) without extensive overhead.
One use case is, for example, to increase the resolution of the c2c tool.
Another is double checking when you suspect the standard sampling has
too much sampling error.
Some numbers on the overhead, using cycle soak, comparing the elapsed
time from "kernbench -M -H" between plain (threshold set to one) and
multi (large threshold).
The test command for plain:
"perf record --time -e cycles:p -c $period -- kernbench -M -H"
The test command for multi:
"perf record --no-time -e cycles:p -c $period -- kernbench -M -H"
( The only difference of test command between multi and plain is time
stamp options. Since time stamp is not supported by large PEBS
threshold, it can be used as a flag to indicate if large threshold is
enabled during the test. )
With periods below 100003, plain (threshold one) causes much more overhead. With a 10003 sampling period, the Elapsed Time for multi is even 2X faster than plain.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: eranian@google.com Link: http://lkml.kernel.org/r/1430940834-8964-5-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Yan, Zheng [Wed, 6 May 2015 19:33:49 +0000 (15:33 -0400)]
perf/x86/intel: Handle multiple records in the PEBS buffer
When the PEBS interrupt threshold is larger than one record and the
machine supports multiple PEBS events, the records of these events are
mixed up and we need to demultiplex them.
Demuxing the records is hard because the hardware is deficient. The
hardware has two issues that, when combined, create impossible
scenarios to demux.
The first issue is that the 'status' field of the PEBS record is a copy
of the GLOBAL_STATUS MSR at PEBS assist time. To see why this is a
problem let us first describe the regular PEBS cycle:
A) the CTRn value reaches 0:
- the corresponding bit in GLOBAL_STATUS gets set
- we start arming the hardware assist
< some unspecified amount of time later -- this could cover multiple
events of interest >
B) the hardware assist is armed, any next event will trigger it
C) a matching event happens:
- the hardware assist triggers and generates a PEBS record
this includes a copy of GLOBAL_STATUS at this moment
- if we auto-reload we (re)set CTRn
- we clear the relevant bit in GLOBAL_STATUS
Now consider the following chain of events:
A0, B0, A1, C0
The event generated for counter 0 will include a status with counter 1 set, even though it's not at all related to the record. A similar thing
can happen with a !PEBS event if it just happens to overflow at the
right moment.
The second issue is that the hardware will only emit one record for two or more counters if the event that triggers the assist is 'close'. 'Close' can mean several cycles, in some cases even the duration of the complete assist, if the event is something that doesn't need retirement.
For instance, consider this chain of events:
A0, B0, A1, B1, C01
Where C01 is an event that triggers both hardware assists, we will generate but a single record, again with both counters listed in the status field.
This time the record pertains to both events.
Note that these two cases are different but indistinguishable with the data as generated. Therefore demuxing records with multiple PEBS bits set (we can safely ignore status bits for !PEBS counters) is impossible.
Furthermore we cannot emit the record to both events because that might
cause a data leak -- the events might not have the same privileges -- so
what this patch does is discard such events.
The assumption/hope is that such discards will be rare.
Here are some possible ways you may end up with a high discard rate:
- when you count the same thing multiple times. But it is not a useful
configuration.
- you can be unfortunate if you measure with a userspace only PEBS
event along with either a kernel or unrestricted PEBS event. Imagine
the event triggering and setting the overflow flag right before
entering the kernel. Then all kernel side events will end up with
multiple bits set.
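A hedged sketch of the discard decision described above; the identifiers are illustrative rather than the exact kernel names:

/* A record whose status word has more than one PEBS-enabled counter bit set
 * cannot be attributed to a single event, so it is dropped and accounted as
 * a lost sample. Status bits of !PEBS counters are masked out first. */
static int pebs_record_is_ambiguous(unsigned long long status,
				    unsigned long long pebs_enabled)
{
	unsigned long long bits = status & pebs_enabled;	/* ignore !PEBS counters */

	return __builtin_popcountll(bits) > 1;	/* >1 bit set: impossible to demux */
}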
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by: Kan Liang <kan.liang@intel.com>
[ Changelog improvements. ] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: eranian@google.com Link: http://lkml.kernel.org/r/1430940834-8964-4-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Yan, Zheng [Wed, 6 May 2015 19:33:47 +0000 (15:33 -0400)]
perf/x86/intel: Use the PEBS auto reload mechanism when possible
When a fixed period is specified, this patch makes perf use the PEBS
auto reload mechanism. This makes normal profiling faster, because
it avoids one costly MSR write in the PMI handler.
However, the reset value will be loaded by hardware assist. There is a
small delay compared to the previous non-auto-reload mechanism. The
delay time is arbitrary, but very small. The assist cost is 400-800
cycles, assuming common cases with everything cached. The minimum period
the patch currently uses is 10000. In that extreme case it can be ~10%
if cycles are used.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com> Signed-off-by: Kan Liang <kan.liang@intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: eranian@google.com Link: http://lkml.kernel.org/r/1430940834-8964-2-git-send-email-kan.liang@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Stephane Eranian [Thu, 14 May 2015 21:09:59 +0000 (23:09 +0200)]
perf/x86/intel: add support for PERF_SAMPLE_BRANCH_IND_JUMP
This patch enables support for branch sampling filter
for indirect jumps (IND_JUMP). It enables LBR IND_JMP
filtering where available. There is also software filtering
support.
Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@redhat.com Cc: dsahern@gmail.com Cc: jolsa@redhat.com Cc: kan.liang@intel.com Cc: namhyung@kernel.org Link: http://lkml.kernel.org/r/1431637800-31061-3-git-send-email-eranian@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Stephane Eranian [Thu, 14 May 2015 21:09:58 +0000 (23:09 +0200)]
perf: add new PERF_SAMPLE_BRANCH_IND_JUMP branch sample type
This patch adds a new branch_sample_type flag to enable
filtering branch sampling to indirect jumps. The support
is subject to hardware or kernel software support on each
architecture.
Filtering on indirect jump is useful to study the targets
of the jump.
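As an illustration, a hedged sketch of how a user might request this filter through the perf_event_attr ABI; the event, period and privilege filter below are arbitrary example choices:

#include <linux/perf_event.h>
#include <string.h>

static void setup_ind_jump_sampling(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_HARDWARE;
	attr->config = PERF_COUNT_HW_CPU_CYCLES;
	attr->sample_period = 100000;
	attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_BRANCH_STACK;
	/* only record indirect jumps, restricted to user space */
	attr->branch_sample_type = PERF_SAMPLE_BRANCH_IND_JUMP |
				   PERF_SAMPLE_BRANCH_USER;
}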
Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@redhat.com Cc: dsahern@gmail.com Cc: jolsa@redhat.com Cc: kan.liang@intel.com Cc: namhyung@kernel.org Link: http://lkml.kernel.org/r/1431637800-31061-2-git-send-email-eranian@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Wang Nan [Wed, 3 Jun 2015 08:52:21 +0000 (08:52 +0000)]
perf tools: Deal with kernel module names in '[]' correctly
Before patch ba92732e9808 ('perf kmaps: Check kmaps to make code more robust'), 'perf report' and 'perf annotate' would segfault if trace data contained kernel module information like this:
This is because __kmod_path__parse treats names with a leading '[' as kernel names instead of kernel module names.
If perf.data contains build information and the buildid of such a module can be found, its dso->kernel will be set to DSO_TYPE_KERNEL by __event_process_build_id(), instead of being treated as a kernel module.
It will then be passed to dso__load() -> dso__load_kernel_sym() ->
dso__load_kcore() if --kallsyms is provided.
The referred patch adds a NULL pointer check to avoid the segfault. However, such kernel modules are still processed incorrectly.
This patch fixes __kmod_path__parse so it treats names like '[test_module]' as kernel modules.
kmod-path.c is also updated to reflect the above changes.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/r/1433321541-170245-1-git-send-email-wangnan0@huawei.com
[ Fixed the merge with 0443f36b0de0 ("perf machine: Fix the search for the kernel DSO on the unified list") ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Mon, 1 Jun 2015 07:37:48 +0000 (07:37 +0000)]
tools: Move tools/perf/util/include/linux/{list.h,poison.h} to tools/include
This patch moves list.h from tools/perf/util/include/linux/list.h to tools/include/linux/list.h to enable other libraries to use the macros in it, like libbpf, which will be introduced by further patches. Since list.h depends on poison.h, poison.h is also moved.
Both files use relative paths, so one '..' is removed from each header to make them suit the new directory.
MANIFEST is also updated for 'make perf-*-src-pkg'.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kaixu Xia <xiakaixu@huawei.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1433144296-74992-3-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Mon, 1 Jun 2015 07:37:47 +0000 (07:37 +0000)]
perf tools: Move linux/kernel.h to tools/include
This patch moves kernel.h from tools/perf/util/include/linux/kernel.h to tools/include/linux/kernel.h to enable other libraries to use the macros in it, like libbpf, which will be introduced by further patches.
MANIFEST is also updated for 'make perf-*-src-pkg'.
Signed-off-by: Wang Nan <wangnan0@huawei.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Cc: Brendan Gregg <brendan.d.gregg@gmail.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David Ahern <dsahern@gmail.com> Cc: He Kuang <hekuang@huawei.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kaixu Xia <xiakaixu@huawei.com> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Zefan Li <lizefan@huawei.com> Cc: pi3orama@163.com Link: http://lkml.kernel.org/r/1433144296-74992-2-git-send-email-wangnan0@huawei.com
[ Fixed up the ifdef guard to match other entries in tools/include/linux ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf machine: Fix the search for the kernel DSO on the unified list
When unifying the user_dsos and kernel_dsos a bug was introduced by
inverting the check for dso->kernel, fix it.
Fixes: 3d39ac538629 ("perf machine: No need to have two DSOs lists") Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-xnrnq0kams3s2z9ek1wjb506@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Remove newline char when reading event scale and unit
Commit fd979c013207 introduced the perf_event_sysfs_show() function to display the event_str value of an attr in kernel/events/core.c. But the function returns the value with a newline char.
So, if an event also carries an event.unit file, the perf tool's formatting goes for a spin when printing the counter data.
That is, because of the event unit, the event name is printed on a new line, since perf_event_sysfs_show() returns the value with a newline char.
Now, fixing the perf core would break the API, hence proposing a fix in the perf tool.
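A minimal sketch of the tool-side idea, assuming the fix simply strips the trailing newline after reading the sysfs scale/unit string; the helper name is illustrative:

#include <string.h>

/* Drop a trailing '\n' so the unit string doesn't break the counter output. */
static void strip_trailing_newline(char *s)
{
	size_t len = strlen(s);

	if (len && s[len - 1] == '\n')
		s[len - 1] = '\0';
}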
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Link: http://lkml.kernel.org/r/1433052383-21802-1-git-send-email-maddy@linux.vnet.ibm.com
[ Add spaces around operators ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Wang Nan [Fri, 29 May 2015 09:45:47 +0000 (09:45 +0000)]
perf probe: Fix segfault when glob matching function without debuginfo
Commit 4c859351226c920b227fec040a3b447f0d482af3 ("perf probe: Support
glob wildcards for function name") introduces segfault problems when
debuginfo is not available:
# perf probe 'sys_w*'
Added new events:
Segmentation fault
The first problem resides in find_probe_trace_events_from_map(). In that function, find_probe_functions() is called to match each symbol against the glob to find the number of matching functions, but map__for_each_symbol_by_name() is still used to find the 'struct symbol' for the matching functions. Unfortunately, map__for_each_symbol_by_name() does exact matching by searching in an rbtree.
It doesn't know about glob matching, and it is not easy for it to support that, because it uses an rbtree-based binary search and we are unable to ensure that all names matched by the glob (any glob passed by the user) reside in one subtree.
This patch drops map__for_each_symbol_by_name(). Since the rbtree can no longer be used, re-matching all symbols would cost a lot; this patch avoids that by saving all matching results into an array (syms).
The second problem is the loss of tp->realname. In __add_probe_trace_events(), if pev->point.function is a glob, the event name should be set to tev->point.realname. This patch ensures its existence by strdup()ing sym->name instead of leaving a NULL pointer there.
After this patch:
# perf probe 'sys_w*'
Added new events:
probe:sys_waitid (on sys_w*)
probe:sys_wait4 (on sys_w*)
probe:sys_waitpid (on sys_w*)
probe:sys_write (on sys_w*)
probe:sys_writev (on sys_w*)
You can now use it in all perf tools, such as:
perf record -e probe:sys_writev -aR sleep 1
Signed-off-by: Wang Nan <wangnan0@huawei.com> Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Zefan Li <lizefan@huawei.com> Link: http://lkml.kernel.org/r/1432892747-232506-1-git-send-email-wangnan0@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Fri, 29 May 2015 12:53:44 +0000 (21:53 +0900)]
perf tools: Make Ctrl-C stop processing on TUI
It was inconvenient that perf could not be quit with SIGINT while processing samples on the TUI, especially for large data files.
This was because the first argument of SLang_init_tty(), abort_char, was 0. The manual says it's the ASCII value of the control character that will be used to generate the interrupt signal [1]. Passing -1 means to use the default value (Ctrl-C).
However, after processing samples, Ctrl-C is used in other cases as well, like stepping back from annotate. So recover the original behavior after processing.
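A hedged sketch of the idea using the documented SLang calls; the wrapper names are hypothetical and the exact restore sequence in the patch may differ:

#include <slang.h>

static void tui_tty_for_processing(void)
{
	SLang_init_tty(-1, 0, 0);	/* -1: default abort char, so Ctrl-C raises SIGINT */
}

static void tui_tty_after_processing(void)
{
	SLang_reset_tty();
	SLang_init_tty(0, 0, 0);	/* back to the UI handling Ctrl-C itself */
}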
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1432904024-13170-1-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Jiri Olsa [Fri, 29 May 2015 15:42:58 +0000 (17:42 +0200)]
perf build: Do not fail on missing Build file
Allow nesting into directories without Build file. Currently we force
include of the Build file, which fails the build when the Build file is
missing.
We already support empty '*-in.o' objects if there's nothing in the directory to be compiled, so we can just use that for the missing Build file cases.
Also add this case to the tests.
Reported-by: Rabin Vincent <rabin.vincent@axis.com> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Rabin Vincent <rabin.vincent@axis.com> Link: http://lkml.kernel.org/r/1432914178-24086-1-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
1) There is no 'struct vdso' for us to have vdso__ prefixed routines.
2) Because it will not really just create a new instance of 'struct
dso', it'll call dso__new() but it will also insert it into the
DSO's list/rbtree, and we have a method name for that: 'addnew',
just like we have dsos__addnew().
3) So it is really a 'struct machine' operation, it is the first
argument, etc.
This way the place where this is used gets consistent:
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-r3w3tvh8exm9xfz3p4tz9qbz@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Similar to machine__findnew_thread(), also prepping for refcounting and
locking, this time for struct dso instances.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-fv3tshv5o1413coh147lszjc@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
4: read samples using the mmap interface: FAILED!
This is because arm64 doesn't come with the getpgrp() syscall. The syscall is a BSD compatibility wrapper; archs that don't define __ARCH_WANT_SYS_GETPGRP do not have it. Remove it, since getpgid() is already used in the testcase.
Riku Voipio [Fri, 29 May 2015 15:36:11 +0000 (12:36 -0300)]
perf tests: Rename open*.c to openat*.c
Since the syscall being tested is now openat rather than open, rename the files to make it explicit. The patch is separated from the first to make it simpler to deal with any potential conflicts in the Makefile.
Multiple perf tests fail on arm64 due to missing open syscall:
2: detect open syscall event : FAILED!
open(2) is a legacy syscall, replaced with openat(2) since 2.6.16. Thus new architectures in the kernel, such as arm64, don't implement this legacy syscall.
The patch replaces all sys_enter_open events with sys_enter_openat and renames the related tests and test output to avoid confusion.
perf kmem: Fix compiler warning about may be accessing uninitialized variable
The last argument to strtok_r() doesn't need to be initialized, it's just a placeholder to make the routine reentrant, but gcc doesn't know about that and complains, breaking the build. Fix it by setting it to NULL.
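For illustration, a minimal example of the pattern in question; the initialization exists only to keep gcc quiet:

#include <string.h>

static void split_example(char *line)
{
	char *tok, *saveptr = NULL;	/* not required by strtok_r(), silences gcc */

	for (tok = strtok_r(line, ",", &saveptr); tok;
	     tok = strtok_r(NULL, ",", &saveptr))
		;	/* use tok */
}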
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Link: http://lkml.kernel.org/n/tip-8e8rgbg3aom9uarsyqjrsctg@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adrian Hunter [Fri, 29 May 2015 13:33:29 +0000 (16:33 +0300)]
perf db-export: Fix thread ref-counting
Thread ref-counting was not done for get_main_thread() meaning that
there was a thread__get() from machine__find_thread() that was not being
paired with thread__put(). Fix that.
Wang Nan [Thu, 28 May 2015 02:25:05 +0000 (02:25 +0000)]
perf probe: Fix 'function unused' warning
'make build-test' finds a warning in probe-event.c: after commit 419e873828 ("perf probe: Show the error reason comes from invalid DSO"), the only user of kernel_get_module_dso() is open_debuginfo(), which is not compiled if HAVE_DWARF_SUPPORT is not set.
'make build-test' found this problem in the make_minimal case.
This patch moves kernel_get_module_dso() into the HAVE_DWARF_SUPPORT ifdef section.
Ingo Molnar [Thu, 28 May 2015 09:09:22 +0000 (11:09 +0200)]
Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
Pull perf/core refactorings and improvements from Arnaldo Carvalho de Melo:
User visible changes:
- Add hint for 'Too many events are opened.' error message (Jiri Olsa)
Infrastructure changes:
- Protect accesses to map rbtrees with a lock and refcount struct map,
reducing memory usage as maps not used get freed. The 'dso' struct is
next in line. (Arnaldo Carvalho de Melo)
- Annotation and branch related option parsing refactorings to
share code with upcoming patches (Andi Kleen)
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Jiri Olsa [Mon, 25 May 2015 20:51:54 +0000 (22:51 +0200)]
perf tools: Add hint for 'Too many events are opened.' error message
Enhance the 'Too many events are opened.' error message with a hint to use the 'ulimit -n <limit>' command.
Before:
$ perf record -e 'sched:*,syscalls:*' ls
Error:
Too many events are opened.
Try again after reducing the number of events.
Now:
$ perf record -e 'sched:*,syscalls:*' ls
Error:
Too many events are opened.
Probably the maximum number of open file descriptors has been reached.
Hint: Try again after reducing the number of events.
Hint: Try increasing the limit with 'ulimit -n <limit>'
Signed-off-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1432587114-14924-1-git-send-email-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
We have pointers to struct map instances in several places, like in the
hist_entry instances, so we need a way to know when we can destroy them,
otherwise we may either keep leaking them or end up referencing deleted
instances.
Start fixing it by reference counting them.
This patch puts the reference count for struct map in place, replacing
direct map__delete() calls with map__put() ones and then grabbing a
reference count when adding it to the maps struct where maps for a
struct thread are kept.
Next we'll grab reference counts when setting pointers to struct map
instances, in places like in the hist_entry code.
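A hedged sketch of the get/put pattern being introduced; the plain int refcount is for illustration only (the real code uses an atomic counter) and map__delete() is reduced to a stub:

#include <stdlib.h>

struct dso;	/* opaque here */

struct map {
	int		refcnt;	/* illustration; the real field is atomic */
	struct dso	*dso;
	/* ... address range, etc. ... */
};

static void map__delete(struct map *map)
{
	free(map);			/* stand-in for the real destructor */
}

static struct map *map__get(struct map *map)
{
	if (map)
		map->refcnt++;		/* grab a reference when storing a pointer */
	return map;
}

static void map__put(struct map *map)
{
	if (map && --map->refcnt == 0)
		map__delete(map);	/* only the last put frees the instance */
}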
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-wi19xczk0t2a41r1i2chuio5@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Protect accesses to the map rbtrees with a rw lock
To allow concurrent access, next step: refcount struct map instances, so that we can ditch maps->removed_maps and stop leaking threads and maps; then 'struct dso' needs the same treatment.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-o45w2w5dzrza38nzqxnqzhyf@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
That, for now, has the maps rbtree and the list for the dead maps that may still be referenced from some hist_entry, etc.
This paves the way for protecting the rbtree with a lock, then refcounting the maps and finally removing the removed_maps list, as it will no longer be needed.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-fl0fa6142pj8khj97fow3uw0@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Masami Hiramatsu [Wed, 27 May 2015 08:37:18 +0000 (17:37 +0900)]
perf probe: Show the error reason comes from invalid DSO
Show the reason for the error when dso__load*() fails. This shows up when the user gives a wrong kernel image or a wrong path.
Without this, perf probe shows an obscure message:
----
$ perf probe -k ~/kbin/linux-3.x86_64/vmlinux -L vfs_read
Failed to find path of kernel module.
Error: Failed to show lines.
----
With this, perf shows appropriate error message:
----
$ perf probe -k ~/kbin/linux-3.x86_64/vmlinux -L vfs_read
Failed to find the path for kernel: Mismatching build id
Error: Failed to show lines.
----
And:
----
$ perf probe -k /non-exist/kernel/vmlinux -L vfs_read
Failed to find the path for kernel: No such file or directory
Error: Failed to show lines.
----
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Richard Weinberger <richard@nod.at> Link: http://lkml.kernel.org/r/20150527083718.23880.84100.stgit@localhost.localdomain Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adrian Hunter [Fri, 22 May 2015 11:53:58 +0000 (14:53 +0300)]
perf tools: Disallow PMU events intel_pt and intel_bts until there is support
Disallow PMU events intel_pt and intel_bts until the tools support them.
By default any PMU is selectable as an event but until the tools have
intel_pt and intel_bts support using them would result in no data being
recorded without any indication as to why.
Before the change:
$ perf record -e intel_bts// sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.008 MB perf.data ]
$ perf report --stdio
Error:
The perf.data file has no samples!
After the change:
$ perf record -e intel_bts// sleep 1
invalid or unsupported event: 'intel_bts//'
Run 'perf list' for a list of valid events
Josef Bacik [Fri, 22 May 2015 13:18:40 +0000 (09:18 -0400)]
perf sched: Add option to merge like comms to lat output
Sometimes when debugging large multi-threaded applications it is helpful
to collate all of the latency numbers into one bulk record to get an
idea of what is going on.
This patch does this by merging any entries that belong to the same comm into one entry and then spitting out those totals.
I've also slightly changed the output so you can see how many threads were merged in the processing. Here is the new default output format:
-----------------------------------------------------------------------------------------------------------
Task | Runtime ms | Switches | Average delay ms | Maximum delay ms | Maximum delay at |
-----------------------------------------------------------------------------------------------------------
chrome:(23) | 740.878 ms | 2612 | avg: 0.022 ms | max: 0.845 ms | max at: 7935.254223 s
pulseaudio:1523 | 94.440 ms | 597 | avg: 0.027 ms | max: 0.110 ms | max at: 7934.668372 s
threaded-ml:6042 | 72.554 ms | 386 | avg: 0.035 ms | max: 1.186 ms | max at: 7935.330911 s
Chrome_IOThread:3832 | 52.388 ms | 456 | avg: 0.021 ms | max: 1.365 ms | max at: 7935.330602 s
Chrome_ChildIOT:(7) | 50.694 ms | 743 | avg: 0.021 ms | max: 1.448 ms | max at: 7935.256659 s
Compositor:5510 | 30.012 ms | 192 | avg: 0.019 ms | max: 0.131 ms | max at: 7936.636815 s
plugin_audio_th:6043 | 24.828 ms | 314 | avg: 0.018 ms | max: 0.143 ms | max at: 7936.205994 s
CompositorTileW:(2) | 14.099 ms | 45 | avg: 0.022 ms | max: 0.153 ms | max at: 7937.521800 s
The (#) after the task is the number of tasks merged; if no tasks were merged it just shows the pid. Here is the same trace file with the -p option to print the per-pid latency numbers:
-----------------------------------------------------------------------------------------------------------
Task | Runtime ms | Switches | Average delay ms | Maximum delay ms | Maximum delay at |
-----------------------------------------------------------------------------------------------------------
chrome:5500 | 386.872 ms | 387 | avg: 0.023 ms | max: 0.241 ms | max at: 7936.001694 s
pulseaudio:1523 | 94.440 ms | 597 | avg: 0.027 ms | max: 0.110 ms | max at: 7934.668372 s
threaded-ml:6042 | 72.554 ms | 386 | avg: 0.035 ms | max: 1.186 ms | max at: 7935.330911 s
chrome:10226 | 69.710 ms | 251 | avg: 0.023 ms | max: 0.764 ms | max at: 7935.992305 s
chrome:4267 | 64.551 ms | 418 | avg: 0.021 ms | max: 0.294 ms | max at: 7937.862427 s
chrome:4827 | 62.268 ms | 54 | avg: 0.029 ms | max: 0.666 ms | max at: 7935.992813 s
Chrome_IOThread:3832 | 52.388 ms | 456 | avg: 0.021 ms | max: 1.365 ms | max at: 7935.330602 s
chrome:3776 | 46.150 ms | 349 | avg: 0.023 ms | max: 0.845 ms | max at: 7935.254223 s
perf tools: Leave DSO destruction to the map destruction
DSOs are normally created via dsos__findnew(), so that we don't have to load the same DSO multiple times for multiple maps (think about /lib64/libc.so.6); they may thus be shared, and dso__delete() should be left to be done as part of the map destruction process.
This will all be properly solved by reference counting struct dso, which
will be done soon.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-gbrohe1nvkjxw3u5a1bgj3yh@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
in the thread destructor as a debugging check to find out about possibly still-referenced thread instances being deleted. To do that we need to make sure we use RB_CLEAR_NODE() right after rb_erase(), i.e. that we use the newly introduced rb_erase_init(), which works just like list_del_init().
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-4fcqo5ypy1cjjf15ilb0hn78@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Import rb_erase_init from block/ in the kernel sources
I was assuming rb_erase() was setting things up like list_del_init(), but the fact that thread__delete() was being successful is because the last thing done before deleting is to remove the thread from the machine->dead_threads list, using list_del_init(), which has the same effect as using rb_erase_init()...
Introduce this function so that we can use it when removing objects from
rb_trees.
Then we will be able to BUG_ON(still on a list) in destructors.
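For reference, the imported helper is tiny, mirroring list_del_init():

#include <linux/rbtree.h>

static inline void rb_erase_init(struct rb_node *n, struct rb_root *root)
{
	rb_erase(n, root);
	RB_CLEAR_NODE(n);	/* leave the node marked as "not on a tree" */
}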
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-55b16mbtndjyd7zzg8nmnamx@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
perf tools: Remove redundant initialization of thread linkage members
A thread moves from an rb tree to a list, but can't be on both, because those linkage members are in a union. This is leftover from when I was debugging thread refcounting and had nuked that union.
It is harmless duplication, as RB_CLEAR_NODE() does again what INIT_LIST_HEAD() does.
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Ahern <dsahern@gmail.com> Cc: Don Zickus <dzickus@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Stephane Eranian <eranian@google.com> Link: http://lkml.kernel.org/n/tip-hmma9lmip6qlhzhgkhp9tzd1@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Wed, 20 May 2015 16:03:41 +0000 (01:03 +0900)]
perf tools: Add dso__data_get/put_fd()
Using dso__data_fd() in a multi-thread environment is not safe since the returned fd can be closed and/or reused at any time.
So convert it to the dso__data_get/put_fd() pair to protect the access with a lock.
The original dso__data_fd() is deprecated and kept only for testing.
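A hedged usage sketch of the new pair, assuming the get variant returns the fd the way dso__data_fd() did and that the perf dso/machine declarations come from the usual util/ headers:

#include <unistd.h>

/* The fd is only guaranteed valid between get and put, so all reads happen
 * inside that window; read_some_dso_data() is a hypothetical caller. */
static int read_some_dso_data(struct dso *dso, struct machine *machine,
			      void *buf, size_t size, off_t offset)
{
	int fd = dso__data_get_fd(dso, machine);
	ssize_t n = -1;

	if (fd >= 0) {
		n = pread(fd, buf, size, offset);
		dso__data_put_fd(dso);	/* fd may be closed/reused after this */
	}
	return n < 0 ? -1 : 0;
}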
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1432137821-10853-3-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Wed, 20 May 2015 16:03:40 +0000 (01:03 +0900)]
perf tools: Get rid of dso__data_fd() from dso__data_size()
It seems that dso__data_fd() was needed to find a binary type, since the open in data_file_size() alone used to fail.
But as it can open the dso fine now, the dso__data_fd() call can go away.
Signed-off-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1432137821-10853-2-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When dso__data_read_offset/addr() is called without a prior dso__data_fd() (or other functions which call it internally), it fails to open the dso in data_file_size() since its binary type was not identified.
However, calling dso__data_fd() in dso__data_read_offset() would hurt performance as it grabs a global lock every time. So factor out the loop over the binary types from dso__data_fd(), and call it from both.
Reported-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/1432137821-10853-1-git-send-email-namhyung@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Namhyung Kim [Tue, 19 May 2015 08:04:10 +0000 (17:04 +0900)]
perf hists: Reducing arguments of hist_entry_iter__add()
The evsel and sample arguments are to set iter for later use. As it
also receives an iter as another argument, just set them before calling
the function.
Adrian Hunter [Tue, 19 May 2015 13:05:45 +0000 (16:05 +0300)]
perf session: Fix perf_session__peek_event()
perf_session__peek_event() generally leverages there being a single mmap of the perf.data file. However, on 32-bit platforms, when there is more than 32MiB of data there are multiple mmaps, so perf_session__peek_event() reads from the file.
In that case a couple of bugs were exposed (note how the seg fault appears with >32M of data):
$ perf record --per-thread -e intel_bts// ../rtit-tests/loopy 1000000
[ perf record: Woken up 13 times to write data ]
[ perf record: Captured and wrote 24.568 MB perf.data ]
$ perf script > /dev/null
$ perf record --per-thread -e intel_bts// ../rtit-tests/loopy 10000000
[ perf record: Woken up 136 times to write data ]
[ perf record: Captured and wrote 270.794 MB perf.data ]
$ perf script > /dev/null
Segmentation fault (core dumped)
The wrong address was being passed to the readn() function and the
buffer size was not being checked.
Adrian Hunter [Tue, 19 May 2015 13:05:43 +0000 (16:05 +0300)]
perf build: Fix libunwind feature detection on 32-bit x86
The libunwind feature would never be detected because of the following error:
$ cat tools/build/feature/test-libunwind.make.output
/usr/lib/gcc/i686-linux-gnu/4.8/../../../i386-linux-gnu/libunwind-x86.so: undefined reference to `lzma_stream_buffer_decode'
/usr/lib/gcc/i686-linux-gnu/4.8/../../../i386-linux-gnu/libunwind-x86.so: undefined reference to `lzma_index_uncompressed_size'
/usr/lib/gcc/i686-linux-gnu/4.8/../../../i386-linux-gnu/libunwind-x86.so: undefined reference to `lzma_index_end'
/usr/lib/gcc/i686-linux-gnu/4.8/../../../i386-linux-gnu/libunwind-x86.so: undefined reference to `lzma_index_buffer_decode'
/usr/lib/gcc/i686-linux-gnu/4.8/../../../i386-linux-gnu/libunwind-x86.so: undefined reference to `lzma_stream_footer_decode'
/usr/lib/gcc/i686-linux-gnu/4.8/../../../i386-linux-gnu/libunwind-x86.so: undefined reference to `lzma_index_size'
collect2: error: ld returned 1 exit status
Fix by adding -llzma and re-ordering to match the dependencies.
Adrian Hunter [Tue, 19 May 2015 13:05:44 +0000 (16:05 +0300)]
perf tools: Fix parse_events_error dereferences
Parse errors can be reported in struct parse_events_error but the
pointer passed is optional and can be NULL. Ensure it is not NULL
before dereferencing it.
Adrian Hunter [Tue, 19 May 2015 13:05:42 +0000 (16:05 +0300)]
perf tools: Fix function declarations needed by parse-events.y
Patch "perf tools: Add location to pmu event terms" moved declarations
for parse_events_term__num() and parse_events_term__str() so that they
were no longer visible in parse-events.y. That can result in segfaults as the arguments no longer need to match the function prototype.
Move the declarations back, changing YYLTYPE pointers to
pointers-to-void because YYLTYPE is not generated until parse-events.y
is processed.
Nam T. Nguyen [Mon, 18 May 2015 18:37:27 +0000 (11:37 -0700)]
perf tools: Separate the tests and tools in installation
This refactors install-bin into install-tests and install-tools so that downstream users can opt to install only the tools, and not the tests.
Signed-off-by: Nam T. Nguyen <namnguyen@chromium.org> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Simon Que <sque@chromium.org> Link: http://lkml.kernel.org/r/1431974247-22275-1-git-send-email-namnguyen@chromium.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Initially, we were trying to guard against scenarios where somebody attaches to the system with a hardware debugger while PT is enabled from software, and pt_is_running() tries to make sure we handle this better; but the truth is, there is still a race window no matter what, and people with hardware debuggers should really know what they are doing anyway.
In other words, there is no point in keeping this one around, and it's one RDMSR instruction fewer in the fast path.
The case when PT is enabled by the BIOS at boot time is handled
in the driver initialization path and doesn't use pt_is_running().
This patch gets rid of it.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: adrian.hunter@intel.com Cc: hpa@zytor.com Link: http://lkml.kernel.org/r/1429622177-22843-6-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently, the description of pt_buffer_reset_offsets() lacks information
about its calling constraints and ordering with regards to other buffer
management functions.
Add a clarification about when this function has to be called.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: adrian.hunter@intel.com Cc: hpa@zytor.com Link: http://lkml.kernel.org/r/1429622177-22843-5-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
The comments in the driver don't make it absolutely clear as to what
exactly is the calling order and other possible constraints of buffer
management functions.
Document constraints and calling order for the buffer configuration
functions. While at it, replace a redundant check in
pt_buffer_reset_markers() with an explanation why it is not needed.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: adrian.hunter@intel.com Cc: hpa@zytor.com Link: http://lkml.kernel.org/r/1429622177-22843-4-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Currently, there's a set-but-not-used variable in setup_topa_index();
this patch gets rid of it. And while at it, fixes a style issue with
brackets around a one-line block.
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: acme@infradead.org Cc: adrian.hunter@intel.com Cc: hpa@zytor.com Link: http://lkml.kernel.org/r/1429622177-22843-2-git-send-email-alexander.shishkin@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
Don't bother with taking locks if we're not actually going to do
anything. Also, drop the _irqsave(), this is very much only called
from IRQ-disabled context.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
For some obscure reason intel_{start,stop}_scheduling() copy the HT
state to an intermediate array. This would make sense if we ever were
to make changes to it which we'd have to discard.
Except we don't. By the time we call intel_commit_scheduling() we're, as the name implies, committed to them. We'll never back out.
A further hint it's pointless is that stop_scheduling() unconditionally publishes the state.
So the intermediate array is pointless, modify the state in place and
kill the extra array.
And remove the pointless array initialization: INTEL_EXCL_UNUSED == 0.
Note: all is serialized by intel_excl_cntr::lock.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Thu, 21 May 2015 08:57:28 +0000 (10:57 +0200)]
perf/x86/intel: Make WARN()ings consistent
The intel_commit_scheduling() callback is pointlessly different from
the start and stop scheduling callback.
Furthermore, the constraint should never be NULL, so remove that test.
Even though we'll never get called (because we NULL the callbacks)
when !is_ht_workaround_enabled() put that test in.
Collapse the (pointless) WARN_ON_ONCE() and bail on !cpuc->excl_cntrs -- this is doubly pointless, because it's the same condition as is_ht_workaround_enabled(), which was already pointless because the whole method won't ever be called.
Furthermore, make all the !excl_cntrs tests WARN_ON_ONCE(); they're all pointless, because of the above: either the functions ({get,put}_excl_constraint) are already predicated on it existing, or the is_ht_workaround_enabled() thing is the same test.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Thu, 21 May 2015 10:38:21 +0000 (12:38 +0200)]
perf/x86/intel: Add lockdep assert
Lockdep is very good at finding incorrect IRQ state while locking and
is far better at telling us if we hold a lock than the _is_locked()
API. It also generates less code for !DEBUG kernels.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
Peter Zijlstra [Thu, 21 May 2015 08:57:21 +0000 (10:57 +0200)]
perf/x86/intel: Correct local vs remote sibling state
For some obscure reason the current code accounts the current SMT
thread's state on the remote thread and reads the remote's state on
the local SMT thread.
While internally consistent, and 'correct', it's pointless confusion we can do without.
Flip them the right way around.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>