Hi all, quick question: how is the .NET `decimal` type represented in memory?
We all know how floating-point numbers are stored, and the reasons they can be imprecise, but I can't find any information about decimal
except for the following.
Is there any way I can understand this? Inquiring computer scientists demand answers, and after an hour of searching I can't find it. It seems like there are so many wasted bits, or maybe I'm just picturing this wrong in my head. Can someone please shed some light on this? Thank you.
Here is the information you want.
Basically it's a 96-bit integer as the mantissa, plus a sign bit, plus an exponent saying how many decimal places to shift it to the right.
To represent 3.261, you'd have a mantissa of 3261, an exponent of 3, and a sign bit of 0 (i.e. positive). Note that decimal is not normalized (deliberately), so you can also represent 3.2610 using a mantissa of 32610 and an exponent of 4, for example.
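To make the layout concrete, here is a minimal Python sketch of how those pieces combine into a value. It mirrors what C#'s `decimal.GetBits` exposes: three 32-bit words forming the 96-bit mantissa, a sign bit, and a scale (exponent). The function name `decode_decimal` is my own, just for illustration:

```python
from decimal import Decimal

def decode_decimal(lo, mid, hi, sign_bit, scale):
    """Reassemble a .NET-style decimal from its parts:
    a 96-bit integer mantissa (three 32-bit words), a sign bit,
    and a scale saying how many decimal places to shift right."""
    mantissa = lo | (mid << 32) | (hi << 64)   # 96-bit unsigned integer
    sign = -1 if sign_bit else 1
    return Decimal(sign * mantissa).scaleb(-scale)

# 3.261 -> mantissa 3261, scale 3, sign bit 0 (positive)
print(decode_decimal(3261, 0, 0, 0, 3))    # 3.261
# Not normalized: 3.2610 stores mantissa 32610 with scale 4
print(decode_decimal(32610, 0, 0, 0, 4))   # 3.2610
```

Note that the two calls compare equal numerically but preserve different trailing-zero representations, which is exactly the "not normalized" point above.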
I have some more information in my article.