Bit_Hacker Posted May 27, 2012 (edited)

I wrote a small program to explain the type sizes to myself, but while reading a few reverse engineering books I became confused. In one of the chapters I ran into this statement:

<< The word is understood not only as 2 bytes, but also as the size of operands by default; in 32-bit mode, the word equals 4 bytes. >>

Take repe movsd (repeat while equal), for example: movsd moves a dword, which is 2 words, so according to that text it would be 8 bytes. But in my program a dword is 4 bytes... (Please someone straighten me out... lol) Can someone explain this to me?

#include <iostream>
#include <windows.h>

using namespace std;

int main()
{
    cout << "Size of char = " << sizeof(char) << endl;
    cout << "Size of int = " << sizeof(int) << endl;
    cout << "Size of short int = " << sizeof(short int) << endl;
    cout << "Size of unsigned int = " << sizeof(unsigned int) << endl;
    cout << "Size of signed int = " << sizeof(signed int) << endl;
    cout << "Size of long = " << sizeof(long) << endl;
    cout << "Size of bool = " << sizeof(bool) << endl;
    cout << "Size of float = " << sizeof(float) << endl;
    cout << "Size of double = " << sizeof(double) << endl;
    cout << "Size of long double = " << sizeof(long double) << endl;
    cout << "Size of unsigned short = " << sizeof(unsigned short) << endl;
    cout << "Size of signed short = " << sizeof(signed short) << endl;
    cout << "Size of unsigned long = " << sizeof(unsigned long) << endl;
    cout << "Size of signed long = " << sizeof(signed long) << endl;
    cout << "Size of DWORD = " << sizeof(DWORD) << endl;
    cout << "Size of WORD = " << sizeof(WORD) << endl;
    cout << "Size of BYTE = " << sizeof(BYTE) << endl;
    cout << "Size of TRUE = " << sizeof(TRUE) << endl;
    cout << "Size of FALSE = " << sizeof(FALSE) << endl;

    cin.ignore(2);
    return 0;
}

Output of program:

I just came across this: http://www.swansontec.com/sintel.html

<< To keep things simple, this article uses the term "word" to mean the size of a large operand. If your application runs under DOS, a "word" is 16 bits, but if your application runs under Windows, a "word" is 32 bits. >>

I'm still confused even by that snippet. It says a word is 4 bytes (32 bits = 4 bytes), yet my program is saying a dword is 4 bytes... So something fishy is going on....

Edited May 27, 2012 by Bit_Hacker
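As an aside on the repe movsd point, here is a minimal C++ sketch (hypothetical buffers, not taken from the book) of the data movement rep movsd performs in 32-bit mode. Each iteration copies one doubleword, i.e. 4 bytes (2 x 16-bit words), which is where the confusion above comes from:

#include <cstdint>
#include <cstdio>

int main()
{
    std::uint32_t src[4] = { 0x11111111, 0x22222222, 0x33333333, 0x44444444 };
    std::uint32_t dst[4] = {};
    std::uint32_t count  = 4;                    // what ECX would hold before "rep movsd"

    for (std::uint32_t i = 0; i < count; ++i)    // the rep prefix repeats the move ECX times
        dst[i] = src[i];                         // movsd copies one 4-byte dword and advances ESI/EDI by 4

    std::printf("copied %u dwords = %u bytes\n", // 4 dwords = 16 bytes total, not 32
                (unsigned)count, (unsigned)(count * 4u));
    return 0;
}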
T-rad Posted May 27, 2012

The link is for understanding instruction sizes. I only skimmed through the link, but it looks like they are talking about the sizes of the operands and the instructions.

Example: MOV AL, 0B2h

Even though AL is 1 byte, the entire instruction (B0 B2) is 2 bytes.

If you just want the size of the type, then your program is correct.
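To extend that example, here is a small sketch (the encodings are hand-assembled and worth double-checking against a disassembler) showing how the default operand size in 32-bit mode affects instruction length, which is the sense of "word" the quoted book is using:

#include <cstdio>

int main()
{
    // mov al, 0B2h  -- 8-bit operand: opcode B0 + one immediate byte
    unsigned char mov_al[]  = { 0xB0, 0xB2 };
    // mov ax, 0B2h  -- 16-bit operand in 32-bit mode needs the 66h operand-size prefix
    unsigned char mov_ax[]  = { 0x66, 0xB8, 0xB2, 0x00 };
    // mov eax, 0B2h -- 32-bit operand is the default in 32-bit mode, so no prefix
    unsigned char mov_eax[] = { 0xB8, 0xB2, 0x00, 0x00, 0x00 };

    std::printf("mov al,  imm8  : %u bytes\n", (unsigned)sizeof(mov_al));   // 2
    std::printf("mov ax,  imm16 : %u bytes\n", (unsigned)sizeof(mov_ax));   // 4
    std::printf("mov eax, imm32 : %u bytes\n", (unsigned)sizeof(mov_eax));  // 5
    return 0;
}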
ghandi Posted May 27, 2012 (edited)

In computing, word is a term for the natural unit of data used by a particular processor design. A word is basically a fixed-sized group of bits that is handled as a unit by the instruction set and/or hardware of the processor. The number of bits in a word (the word size, word width, or word length) is an important characteristic of a specific processor design or computer architecture.

The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word sized, and the largest piece of data that can be transferred to and from working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (in other words, the full-sized natural word of the processor, as opposed to any other definition used).

Modern processors, including embedded systems, usually have a word size of 8, 16, 24, 32 or 64 bits, while modern general-purpose computers usually use 32 or 64 bits. Special-purpose digital processors, such as DSPs, may use other sizes, and many different sizes have been used historically, including 8, 9, 12, 18, 24, 36, 39, 40, 48 and 60 bits. The "slab" is an example of an earlier word size. Several of the earliest computers (and a few modern ones as well) used BCD rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers had no fixed word length at all.

The size of a word can sometimes differ from the expected one due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below).

http://en.wikipedia....er_architecture)

Does not a single person bother to use a search engine anymore?

HR,
Ghandi

PS: The reason your application outputs different results than you might expect is that the context of 'WORD' is being mixed here. The definition of 'WORD' in most compilers/assemblers is flexible; it is a definition given to the compiler, and there can be 128-, 64-, 32-, 16-bit words, etc. Although they may coincide as the same size/value, the native word size of the processor is not necessarily the defined size of a 'WORD' in a compiler/assembler.

Edited May 27, 2012 by ghandi
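To make that last point concrete, here is a minimal sketch (the typedef spellings are simplified approximations of what the Windows headers do, not verbatim copies) showing that the WinAPI WORD and DWORD are fixed-width definitions, while the machine's natural word tracks the build target:

#include <windows.h>
#include <iostream>

int main()
{
    // These WinAPI names are frozen for backward compatibility with 16-bit Windows:
    // WORD is always 16 bits and DWORD ("double word") is always 32 bits,
    // regardless of whether you compile for 32-bit or 64-bit.
    static_assert(sizeof(WORD)  == 2, "WinAPI WORD is a fixed 16-bit type");
    static_assert(sizeof(DWORD) == 4, "WinAPI DWORD is a fixed 32-bit type");

    // The processor's natural word, by contrast, shows up in things like pointer size:
    // 4 bytes in a 32-bit build, 8 bytes in a 64-bit build.
    std::cout << "sizeof(void*) = " << sizeof(void*) << " bytes" << std::endl;
    return 0;
}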