Well, I'm not sure if there is anything in the ANSI standard for what you're requesting.
The main idea of Unicode (at least in its common 2-byte encoding) is that it uses 2 bytes per character, in order to represent the characters of all the different scripts around the world (but you probably knew that).
I suggest you write your own conversion function. You should take the following steps:
1. declare a wide character type, something like:
typedef unsigned short wchar;
2. the 16-bit (Unicode) form of the string "sachindn" can be:
a. If the machine you're working on is little-endian, the bytes in memory would be:
"s\0a\0c\0h\0i\0n\0d\0n\0" plus a final "\0\0", so each ASCII character is followed by a '\0' byte, including the end-of-string character, which becomes '\0\0'.
This simply means: in each 2-byte character, set the low 8 bits to the ASCII code and the remaining 8 bits to 0.
b. If the machine is big-endian, the zero byte comes first in memory, and the bytes would be "\0s\0a\0c..." and so on.
Then, the function should look like:
Code:
#include <stdlib.h>  /* malloc, free */
#include <string.h>  /* strlen */

wchar *ConvertAsciiToUnicode(const char *asciistr)
{
    size_t i, len = strlen(asciistr);
    wchar *retvalue = malloc((len + 1) * sizeof(wchar)); /* 2 bytes per char, plus the terminator */
    if (retvalue == NULL)
        return NULL;
    for (i = 0; i < len; i++)
        retvalue[i] = (unsigned char)asciistr[i]; /* zero-extend each ASCII character to 16 bits */
    retvalue[len] = 0; /* the end of the string */
    return retvalue;
}
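For example (just a sketch, assuming the typedef and function above), you could call it like this and free() the result when you're done with it:
Code:
wchar *wide = ConvertAsciiToUnicode("sachindn");
if (wide != NULL) {
    /* ... pass the wide string to whatever needs it ... */
    free(wide); /* the caller owns the allocated buffer */
}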
This is a little tricky: pay attention when you allocate a "wide string" from a normal one, because its size is twice that of the normal string (plus the two-byte terminator), on account of the extra zero bytes.
Hope this helps.
I'm not sure yet how you can determine the endian format of the machine.
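One common trick (just a sketch, nothing standard) is to store a known value in an integer and look at which byte ends up first in memory:
Code:
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    /* If the lowest-addressed byte holds the 1, the low byte comes first: little-endian. */
    if (*(unsigned char *)&x == 1)
        printf("little-endian\n");
    else
        printf("big-endian\n");
    return 0;
}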
Regards.