Tuts 4 You

# Local variable definition versus class (instance) variable definition speed

## Recommended Posts


```cpp
// Requires <windows.h> for GetTickCount().
// The arrays are unsigned: hex literals like 0xd76aa478 do not fit in a
// signed int, and brace-initializing an int with them is a narrowing error.

// Global ("class-like") array.
unsigned int K2[3] = { 0xd76aa478, 0xe8c7b756, 0x242070db };

void Function()
{
    // Local (stack) array with the same values.
    unsigned int K1[3] = { 0xd76aa478, 0xe8c7b756, 0x242070db };

    DWORD before2 = GetTickCount();
    for (int ki2 = 0; ki2 < 1000000; ki2++)
    {
        for (int j = 0; j < 3; j++)
        {
            int a = K2[j];
        }
    }
    DWORD after2 = GetTickCount();

    DWORD before1 = GetTickCount();
    for (int ki1 = 0; ki1 < 1000000; ki1++)
    {
        for (int j = 0; j < 3; j++)
        {
            int a = K1[j];
        }
    }
    DWORD after1 = GetTickCount();

    // the results:
    DWORD dif1 = after1 - before1;
    DWORD dif2 = after2 - before2;
}
```

The K1 version (the local variable version) is supposed to be much faster, since it is allocated on the stack.
Sometimes dif1 = 0x01F and dif2 = 0x10; sometimes they have very close values and are even equal;
and sometimes (rarely) even dif1 = 0 while dif2 = 0x13.
Can someone explain what's going on?


Found something interesting:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724408(v=vs.85).aspx
The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.

https://docs.microsoft.com/en-us/windows/desktop/api/timeapi/nf-timeapi-timegettime
The default precision of the timeGetTime function can be five milliseconds or more, depending on the machine.

I replaced GetTickCount with timeGetTime and the results are just a bit better.
I only had to add `#include <mmsystem.h>` (and link against winmm.lib).

I guess you shouldn't rely on GetTickCount/timeGetTime that much!


The optimizer is just going to delete those loops. Even if it didn't, there wouldn't be a difference, because memory is just memory and these ints will just end up in the cache or get inlined as constants. What are you trying to find out exactly?

Edited by deepzero

> What are you trying to find out exactly?

I want to know about the speed of different data type declarations. From your reply, it doesn't matter how I declare them?


First check your compiled program in OllyDbg/x64dbg to see whether the loops are still there in the assembly code or have been optimized away. If the loops are removed, try disabling compiler optimization and run it again to see if anything changes. You can try any of these options:

1. If you are using gcc, compile with `gcc -O0`. I don't know about msvc.

2. Declare the target variable with the `volatile` keyword, like `volatile int a = K2[j];`

3. Write some complex statement inside the loops so that the compiler doesn't dare to optimize them away, or something that creates a side effect, like `printf(".");`

Now recompile and check the assembly code again to make sure the loops are there.

If you still don't get any difference in time, then maybe this wasn't what you were looking for. Personally I am not sure about the speed difference in such a case; so far I only know about the difference when you write it like

```cpp
int a = 2;
```

and

```cpp
myClass obj;
int a = obj.b;
```


If you are looking to benchmark speed, you should use a low-level, high-resolution timer (such as QueryPerformanceCounter) instead of a higher-level API like GetTickCount or timeGetTime.


Doing something much more complex, like computing MD5 10000 times, with local variables and with instance variables:

Local variables ms difference:
0033b
0034a
0035a

Instance variables ms difference:
33a
33b

So there is almost no difference, actually.

12 minutes ago, atom0s said:

> If you are looking to benchmark speed, you should use a low-level, high-resolution timer instead of a higher-level API like GetTickCount and timeGetTime.

If the difference is that low, it doesn't matter anyway!
