perf: Fix branch stack refcount leak on callchain init failure
Author:     Frederic Weisbecker <fweisbec@gmail.com>
AuthorDate: Tue, 23 Jul 2013 00:30:59 +0000 (02:30 +0200)
Commit:     Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 30 Jul 2013 20:22:58 +0000 (22:22 +0200)
On callchain buffer allocation failure, free_event() is called and
all the accounting performed in perf_event_alloc() for that event
is cancelled.

But if the event uses branch stack sampling, free_event() also
decrements the branch stack sampling events refcounts.

This is a bug because that accounting is only performed after the
callchain buffer allocation, so on this failure path the refcounts
are decremented without ever having been incremented. As a result,
the branch stack sampling events refcount can become negative.
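
To illustrate the ordering problem outside the kernel, here is a
minimal userspace model (all names below are made up for illustration;
this is not the kernel code): the alloc path takes the refcount only
after a step that can fail, while teardown always drops it, so a
failure drives the count negative.

    #include <stdio.h>
    #include <stdatomic.h>

    static atomic_int branch_stack_events;  /* models perf_branch_stack_events */

    /* Models get_callchain_buffers() failing with -ENOMEM. */
    static int get_buffers(void) { return -1; }

    /* Models free_event(): drops the refcount unconditionally. */
    static void teardown(void)
    {
            atomic_fetch_sub(&branch_stack_events, 1);
    }

    /* Pre-fix ordering in perf_event_alloc(): fallible step first,
     * accounting second, so the increment is skipped on failure. */
    static int alloc_buggy(void)
    {
            if (get_buffers()) {
                    teardown();     /* decrements a count never taken */
                    return -1;
            }
            atomic_fetch_add(&branch_stack_events, 1);
            return 0;
    }

    int main(void)
    {
            alloc_buggy();
            printf("refcount = %d\n", atomic_load(&branch_stack_events)); /* -1 */
            return 0;
    }

Moving the increment ahead of the fallible step, as the patch below
does, keeps the count balanced on both the success and failure paths.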

To fix this, move the branch stack event accounting ahead of the
callchain buffer allocation, so the increments have already run by
the time free_event() can be called on the error path.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1374539466-4799-2-git-send-email-fweisbec@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/events/core.c

index 1274114..f35aa7e 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6567,6 +6567,12 @@ done:
                        atomic_inc(&nr_comm_events);
                if (event->attr.task)
                        atomic_inc(&nr_task_events);
+               if (has_branch_stack(event)) {
+                       static_key_slow_inc(&perf_sched_events.key);
+                       if (!(event->attach_state & PERF_ATTACH_TASK))
+                               atomic_inc(&per_cpu(perf_branch_stack_events,
+                                                   event->cpu));
+               }
                if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) {
                        err = get_callchain_buffers();
                        if (err) {
@@ -6574,12 +6580,6 @@ done:
                                return ERR_PTR(err);
                        }
                }
-               if (has_branch_stack(event)) {
-                       static_key_slow_inc(&perf_sched_events.key);
-                       if (!(event->attach_state & PERF_ATTACH_TASK))
-                               atomic_inc(&per_cpu(perf_branch_stack_events,
-                                                   event->cpu));
-               }
        }
 
        return event;