So, it's important to realize that there are two separate concepts here: by.total and by.self.
The basic idea is that by.total means normalizing by the total amount of time spent in a function,
which tells you, basically, how often that function appeared anywhere in the call stack in the printout.
So, for example, 100% of your time is spent in the top-level function, the function that you call.
Suppose it's lm; you spend 100% of your time in that function, because it was at the top level.
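As a minimal sketch of what that looks like, and keeping in mind that the file name, the data, and the commented output here are made up purely for illustration, profiling a call to lm() and normalizing by total time might go something like this:

    ## Profile a call to lm() and summarize, normalized by total time
    Rprof("lm.out")
    x <- matrix(rnorm(5e6), ncol = 5)
    y <- rnorm(1e6)
    fit <- lm(y ~ x)
    Rprof(NULL)
    summaryRprof("lm.out")$by.total
    ## "lm" sits at (or near) 100% in this view, because every sample
    ## collected while lm() was anywhere on the call stack counts
    ## toward its total time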
But the reality is that the top-level functions often don't do anything
that's really important; all they do is call helper functions that do the real work.
So chances are, if your function is spending a lot of time doing something,
it's spending that time in the helper functions that the top-level
function calls to do all the work.
And so it's often not very interesting to know how much time is spent in
these top-level functions, because that's not where the real work occurs.
All right, so what you really want to know is how much time is spent in the
top-level function after subtracting out the time spent in all
the lower-level functions that it calls, right?
Sometimes it turns out that a lot of time is spent in the
top-level function even after you subtract out all of the lower-level
functions, and that is something interesting.
But most of the time you will notice that, once you subtract out all the lower-level
functions that get called, there's very
little time spent in the top-level function itself,
because all the work and all the computation is being done
in the lower-level functions, and that's
where you want to focus your efforts.
So the by.self format is, I think, the most
interesting format to use, because it tells you how much time
is being spent in a given function after subtracting out
the time spent in the lower-level functions that it calls.
So it gives you, I think, a more accurate
picture of which functions are truly
taking up the most time, and which functions
you might want to target for optimization later on.
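As a rough sketch of the contrast, using the same hypothetical profile as before (the specific helper functions and the commented output are illustrative, not real measurements):

    ## Summarize the same profile, this time normalized by self time
    summaryRprof("lm.out")$by.self
    ## In this view, lm itself typically accounts for only a small
    ## fraction of the self time; helpers that lm calls internally,
    ## such as lm.fit and model.matrix, rise to the top because they
    ## are where the actual computation happens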