It's a standard problem caused by how computers store floating point values: float/double are stored as binary fractions, not decimal fractions. The following examples illustrate what happens inside the computer:
12.34 in decimal notation (what we use) means 1*10^1 + 2*10^0 + 3*10^-1 + 4*10^-2.
The computer stores floating point numbers the same way, except it uses base 2: the base 10 number 2.25 converts to the base 2 number 10.01, which means 1*2^1 + 0*2^0 + 0*2^-1 + 1*2^-2.
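A quick check in Python (any language works, since powers of two are exact in binary floats) confirming that base-2 expansion of 2.25 — one of the values a binary float *can* store exactly:

```python
# 10.01 in base 2: each digit is weighted by a power of 2.
value = 1 * 2**1 + 0 * 2**0 + 0 * 2**-1 + 1 * 2**-2
print(value)          # 2.25
print(value == 2.25)  # True
```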
Now, you probably know that some numbers cannot be represented exactly in our decimal notation. For example, 1/3 in decimal notation is 0.3333333.... The same thing happens in binary notation, except that the numbers that cannot be represented precisely are different. Among them is 1/10, which in binary notation is 0.000110011001100...
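You can generate that repeating binary expansion yourself by repeatedly doubling the fractional part and taking the integer digit that pops out — a sketch using Python's exact `Fraction` type so no float rounding sneaks into the demonstration itself:

```python
from fractions import Fraction

def binary_digits(frac, n):
    """Return the first n binary digits of a fraction in [0, 1)."""
    digits = []
    for _ in range(n):
        frac *= 2
        bit = int(frac)        # next binary digit: 0 or 1
        digits.append(str(bit))
        frac -= bit            # keep only the remaining fractional part
    return "0." + "".join(digits)

print(binary_digits(Fraction(1, 10), 16))  # 0.0001100110011001
```

The `0011` group repeats forever, so any fixed number of bits is only an approximation of 1/10.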
In summary, a float/double can't store 0.1 precisely. It will always be a little off.
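You can see that rounding directly — a small Python demonstration (the exact digits printed are what standard 64-bit IEEE doubles produce):

```python
# 0.1 is stored as the nearest binary fraction, so arithmetic drifts:
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
print(f"{0.1:.20f}")      # 0.10000000000000000555
```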
You can try using the decimal type, which stores numbers in decimal notation. With it, 0.1 is representable precisely.
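In Python the equivalent is the standard-library `decimal` module (C# has a built-in `decimal` type with the same idea) — a sketch showing 0.1 handled exactly:

```python
from decimal import Decimal

# Construct from strings; Decimal(0.1) would inherit the float's rounding error.
a = Decimal("0.1")
print(a + a + a)                    # 0.3
print(a + a + a == Decimal("0.3"))  # True
```

The trade-off is that decimal arithmetic is slower than hardware floats, so it's typically reserved for money and other values where exact decimal fractions matter.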
Since binary notation cannot store 0.1 precisely, it is stored in a rounded-off way. That rounding is the cause of the problem above.