I was trying to convert characters to binary string form, like "A" to "01000001". To implement this, I used the following C++ code:
#include <iostream>
#include <string>
#include <bitset>
using namespace std;

int main() {
    string message = "A";
    string binaryString = "";
    for (char character : message) {
        binaryString += bitset<8>(character).to_string();
    }
    cout << binaryString << endl;
    return 0;
}
However, when I use a non-ASCII character like "가" (string message = "가";), it produces the wrong binary translation. For example, the non-ASCII character "가" should be translated to 1010110000000000 (U+AC00), but the code above prints 111010101011000010000000, which is obviously different.
I want to know what is wrong in the code above. Or, is there another way to conveniently convert UTF-8 characters to binary form?
>Solution :
For the character 가:

- The UTF-16 encoding is:
  std::wstring message = L"\uAC00";
  Notice that it is declared as a wide string. It could also be:
  std::u16string message = u"\uAC00";
- The UTF-8 encoding is:
  std::string message = "\xEA\xB0\x80";
  These 3 bytes are the 111010101011000010000000 string you mention above.
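To see both encodings side by side, here is a minimal sketch (the variable names utf8 and utf16 are mine, for illustration) that prints the binary form of each encoding's code units:

#include <bitset>
#include <iostream>
#include <string>

int main() {
    // UTF-8 encoding of 가: the three bytes EA B0 80.
    std::string utf8 = "\xEA\xB0\x80";
    std::string utf8Bits;
    for (unsigned char byte : utf8)               // unsigned char avoids sign-extension concerns
        utf8Bits += std::bitset<8>(byte).to_string();
    std::cout << utf8Bits << '\n';                // 111010101011000010000000

    // UTF-16 encoding of 가: a single 16-bit code unit, 0xAC00.
    std::u16string utf16 = u"\uAC00";
    std::string utf16Bits;
    for (char16_t unit : utf16)
        utf16Bits += std::bitset<16>(unit).to_string();
    std::cout << utf16Bits << '\n';               // 1010110000000000
    return 0;
}

Incidentally, your original bitset<8>(character) happens to produce the right bit pattern even where char is signed, because std::bitset<8> keeps only the low 8 bits; casting to unsigned char just makes that intent explicit.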
In general, it's not a good idea to hardcode non-ASCII literals in source files, because you then rely on your IDE or text editor (and source control tools) to save the file in the expected encoding. It only takes one person on your team with a different text editor to accidentally save the file in a different encoding and not notice. The \uAC00 and \xEA\xB0\x80 escape forms above avoid that problem.
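If what you want is the code-point binary form (1010110000000000 for U+AC00) starting from UTF-8 input, you have to decode the UTF-8 bytes back into code points first. Here is a minimal sketch; decodeOne is a hypothetical helper I'm introducing for illustration. It assumes well-formed UTF-8 with no error handling, and bitset<16> only covers BMP characters such as 가 (supplementary-plane code points need more bits):

#include <bitset>
#include <cstdint>
#include <iostream>
#include <string>

// Decode one code point from well-formed UTF-8, advancing the index.
// Hypothetical helper for illustration; no validation of malformed input.
std::uint32_t decodeOne(const std::string& s, std::size_t& i) {
    unsigned char lead = s[i++];
    if (lead < 0x80) return lead;                    // 1-byte (ASCII) sequence
    int extra = (lead >= 0xF0) ? 3 : (lead >= 0xE0) ? 2 : 1;
    std::uint32_t cp = lead & (0x3F >> extra);       // payload bits of the lead byte
    while (extra-- > 0)
        cp = (cp << 6) | (s[i++] & 0x3F);            // 6 payload bits per continuation byte
    return cp;
}

int main() {
    std::string message = "\xEA\xB0\x80";            // UTF-8 for 가 (U+AC00)
    for (std::size_t i = 0; i < message.size();) {
        std::uint32_t cp = decodeOne(message, i);
        std::cout << std::bitset<16>(cp).to_string() << '\n'; // 1010110000000000
    }
    return 0;
}

For production code, prefer a real Unicode library such as ICU over hand-rolled decoding.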