Explanation of the differences between std::string and std::wstring (807 upvotes, with examples)
2017-09-25 16:54
Answer (807 votes), answered Dec 31 '08 by paercebal:
Comments:

- @Sorin Sbarnea: UTF-8 could take 1-6 bytes, but apparently the standard limits it to 1-4. See en.wikipedia.org/wiki/UTF8#Description for more information. – paercebal Jan 13 '10
- While this example produces different results on Linux and Windows, the C++ program contains implementation-defined behavior as to whether olé is encoded as UTF-8 or not. Furthermore, the reason you cannot natively stream a wchar_t * to std::cout is that the types are incompatible, resulting in an ill-formed program; it has nothing to do with the use of encodings. It's worth pointing out that whether you use std::string or std::wstring depends on your own encoding preference rather than the platform, especially if you want your code to be portable. – John Leidegren Aug 9 '12
- @paercebal Whatever the platform supports is entirely arbitrary and beside the point. If you store all strings internally as UTF-8 on Windows, you'll have to convert them to either ANSI or UTF-16 and call the corresponding Win32 function; but if you know your UTF-8 strings are just plain ASCII strings, you don't have to do anything. The platform doesn't dictate how you use strings as much as the circumstances do. – John Leidegren Aug 9 '12
- Windows actually uses UTF-16, and has for quite some time; older versions of Windows did use UCS-2, but this is no longer the case. My only issue here is the conclusion that std::wstring should be used on Windows because it's a better fit for the Unicode Windows API, which I think is fallacious. If your only concern were calling into the Unicode Windows API and not marshalling strings, then sure, but I don't buy this as the general case. – John Leidegren Aug 9 '12
- @John Leidegren: "If your only concern was calling into the Unicode Windows API and not marshalling strings then sure": then we agree. I'm coding in C++, not JavaScript. Avoiding useless marshalling, or any other potentially costly processing at runtime when it can be done at compile time, is at the heart of that language. Coding against the WinAPI using std::string is just an unjustified waste of runtime resources. You find it fallacious, and that's OK, as it is your viewpoint. My own is that I won't write code with pessimization on Windows just because it looks better from the Linux side. – paercebal Aug 9 '12
Answer (33 votes):

So, every reader here should now have a clear understanding of the facts and the situation. If not, then you must read paercebal's outstandingly comprehensive answer [btw: thanks!].

My pragmatic conclusion is shockingly simple: all that C++ (and STL) "character encoding" stuff is substantially broken and useless. Blame it on Microsoft or not, that will not help anyway. My solution, after in-depth investigation, much frustration and the consequential experiences, is the following:

- Accept that you have to be responsible on your own for the encoding and conversion stuff (and you will see that much of it is rather trivial).
- Use std::string for any UTF-8 encoded strings (just a typedef std::string UTF8String).
- Accept that such a UTF8String object is just a dumb but cheap container. Never, ever access and/or manipulate characters in it directly (no search, replace, and so on). You could, but you really, really do not want to waste your time writing text manipulation algorithms for multi-byte strings! Even if other people already did such stupid things, don't do that! Let it be! (Well, there are scenarios where it makes sense... just use the ICU library for those.)
- Use std::wstring for UCS-2 encoded strings (typedef std::wstring UCS2String). This is a compromise, and a concession to the mess that the WIN32 API introduced. UCS-2 is sufficient for most of us (more on that later...).
- Use UCS2String instances whenever character-by-character access is required (read, manipulate, and so on). Any character-based processing should be done in a NON-multibyte representation. It is simple, fast, easy.
- Add two utility functions to convert back and forth between UTF-8 and UCS-2:

  UCS2String ConvertToUCS2( const UTF8String &str );
  UTF8String ConvertToUTF8( const UCS2String &str );

The conversions are straightforward; Google should help here.

That's it. Use UTF8String wherever memory is precious and for all UTF-8 I/O. Use UCS2String wherever the string must be parsed and/or manipulated. You can convert between those two representations any time.

Alternatives & improvements:

- Conversions from and to single-byte character encodings (e.g. ISO-8859-1) can be realized with the help of plain translation tables, e.g. const wchar_t tt_iso88951[256] = {0,1,2,...}; and appropriate code for conversion to and from UCS-2.
- If UCS-2 is not sufficient, switch to UCS-4 (typedef std::basic_string<uint32_t> UCS4String).
- ICU or other Unicode libraries? For advanced stuff.
Answer (26 votes):

I recommend avoiding std::wstring on Windows or elsewhere, except when required by the interface, or anywhere near Windows API calls and the respective encoding conversions, as syntactic sugar.

My view is summarized in http://utf8everywhere.org, of which I am a co-author.

Unless your application is API-call-centric (e.g. mainly a UI application), the suggestion is to store Unicode strings in std::string, encoded in UTF-8, performing conversion near API calls. The benefits outlined in the article outweigh the apparent annoyance of conversion, especially in complex applications. This is doubly so for multi-platform and library development.

And now, answering your questions:

- A few weak reasons. It exists for historical reasons, when wide chars were believed to be the proper way of supporting Unicode. It is now used to interface APIs that prefer UTF-16 strings. I use them only in the direct vicinity of such API calls.
- This has nothing to do with std::string. It can hold whatever encoding you put in it. The only question is how you treat its content. My recommendation is UTF-8, so it will be able to hold all Unicode characters correctly. It's a common practice on Linux, but I think Windows programs should do it also.
- No. "Wide character" is a confusing name. In the early days of Unicode, there was a belief that a character could be encoded in two bytes, hence the name. Today, it stands for "any part of the character that is two bytes long". UTF-16 is seen as a sequence of such byte pairs (aka wide characters). A character in UTF-16 takes either one or two pairs.
Answer (22 votes):

When you want to have wide characters stored in your string. "Wide" depends on the implementation. Visual C++ defaults to 16 bits if I remember correctly, while GCC defaults depend on the target; it's 32 bits long here. Please note that wchar_t (the wide character type) has nothing to do with Unicode. It's merely guaranteed that it can store all the members of the largest character set that the implementation supports by its locales, and that it is at least as long as char.

You can store Unicode strings fine in a std::string using the UTF-8 encoding too. But it won't understand the meaning of Unicode code points. So str.size() won't give you the number of logical characters in your string, but merely the number of char or wchar_t elements stored in that string/wstring. For that reason, the gtk/glib C++ wrapper folks have developed a Glib::ustring class that can handle UTF-8.

If your wchar_t is 32 bits long, then you can use UTF-32 as a Unicode encoding, and you can store and handle Unicode strings using a fixed-length (UTF-32 is fixed-length) encoding. This means your wstring's s.size() function will then return the right number of both wchar_t elements and logical characters.

Yes, char is always at least 8 bits long, which means it can store all ASCII values.

Yes, all major compilers support it.
Answer (5 votes):

I frequently use std::string to hold UTF-8 characters without any problems at all. I heartily recommend doing this when interfacing with APIs which use UTF-8 as the native string type as well. For example, I use UTF-8 when interfacing my code with the Tcl interpreter. The major caveat is that the length of the std::string is no longer the number of characters in the string.
Answer (2 votes):

When you want to store 'wide' (Unicode) characters.

Yes: 255 of them (excluding 0).

Yes.

Here's an introductory article: http://www.joelonsoftware.com/articles/Unicode.html
Answer (2 votes):

Applications that are not satisfied with only 256 different characters have the option of either using wide characters (more than 8 bits) or a variable-length encoding (a multibyte encoding in C++ terminology) such as UTF-8. Wide characters generally require more space than a variable-length encoding, but are faster to process. Multi-language applications that process large amounts of text usually use wide characters when processing the text, but convert it to UTF-8 when storing it to disk.

The only difference between a string and a wstring is the data type of the characters they store. A string stores chars whose size is guaranteed to be at least 8 bits, so you can use strings for processing e.g. ASCII, ISO-8859-15, or UTF-8 text. The standard says nothing about the character set or encoding. Practically every compiler uses a character set whose first 128 characters correspond with ASCII. This is also the case with compilers that use UTF-8 encoding. The important thing to be aware of when using strings in UTF-8 or some other variable-length encoding is that the indices and lengths are measured in bytes, not characters.

The data type of a wstring is wchar_t, whose size is not defined in the standard, except that it has to be at least as large as a char; it is usually 16 bits or 32 bits. wstring can be used for processing text in the implementation-defined wide-character encoding. Because the encoding is not defined in the standard, it is not straightforward to convert between strings and wstrings. One cannot assume wstrings to have a fixed-length encoding either.

If you don't need multi-language support, you might be fine with using only regular strings. On the other hand, if you're writing a graphical application, it is often the case that the API supports only wide characters. Then you probably want to use the same wide characters when processing the text. Keep in mind that UTF-16 is a variable-length encoding, meaning that you cannot assume length() to return the number of characters. If the API uses a fixed-length encoding, such as UCS-2, processing becomes easy. Converting between wide characters and UTF-8 is difficult to do in a portable way, but then again, your user interface API probably supports the conversion.
Answer (1 vote):

1) As mentioned by Greg, wstring is helpful for internationalization; that's when you will be releasing your product in languages other than English.

4) Check this out for wide characters: http://en.wikipedia.org/wiki/Wide_character
Answer (0 votes):

- When you want to use Unicode strings and not just ASCII; helpful for internationalisation.
- Yes, but it doesn't play well with 0.
- Not aware of any that don't.
- A wide character is the compiler-specific way of handling the fixed-length representation of a Unicode character; for MSVC it is a 2-byte character, for GCC I understand it is 4 bytes.

And a +1 for http://www.joelonsoftware.com/articles/Unicode.html
Answer (0 votes):

A good question! I think DATA ENCODING (sometimes a CHARSET is also involved) is a MEMORY EXPRESSION MECHANISM for saving data to a file or transferring data over a network, so I answer the questions as follows:

1. When should I use std::wstring over std::string?
If the programming platform or API function is a single-byte one, and we want to process or parse some Unicode data, e.g. read from a Windows .REG file or a 2-byte network stream, we should declare a std::wstring variable to process it easily. E.g.: wstring ws = L"中国a" (on Windows, where wchar_t is 2 bytes: 6 octets of memory, 0x4E2D 0x56FD 0x0061); we can use ws[0] to get the character '中', ws[1] to get '国', and ws[2] to get 'a', etc.

2. Can std::string hold the entire ASCII character set, including the special characters?
Yes. But notice: in American ASCII, each octet in 0x00~0x7F stands for one character, including printable text such as "123abc&*_&" and the special ones you mentioned, which are mostly printed as '.' to avoid confusing editors or terminals. Some other countries extend their own "ASCII"-like charsets into 0x80~0xFF; Chinese, for example, uses 2 octets to stand for one character.

3. Is std::wstring supported by all popular C++ compilers?
Maybe, or mostly. I have used VC++6 and GCC 3.3: yes.

4. What exactly is a "wide character"?
A wide character mostly indicates using 2 or 4 octets to hold all countries' characters. 2-octet UCS-2 is a representative sample; e.g. for English 'a', its memory is the 2 octets 0x0061 (vs. in ASCII, where 'a' is the single octet 0x61).