# 9. Good practices¶

## 9.1. Rules¶

To limit or avoid issues with Unicode, try to follow these rules:

• decode all byte data as early as possible: keyboard strokes, files, data received from the network, …
• encode Unicode back to bytes as late as possible: write text to a file, log a message, send data to the network, …
• always store and manipulate text as character strings
• if you have to encode text and you can choose the encoding: prefer the UTF-8 encoding. It is able to encode all Unicode 6.0 characters (including non-BMP characters), does not depend on endianness, is well supported by most programs, and its size is a good compromise.
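The "decode early, encode late" rules can be sketched in a few lines of Python; the byte string below stands in for any input boundary (a file, a socket, …):

```python
# Decode bytes as early as possible (e.g. data received from the network).
raw = b"caf\xc3\xa9"          # UTF-8 bytes for "café"
text = raw.decode("utf-8")    # bytes -> str at the input boundary

# Always store and manipulate text as a character string.
assert len(text) == 4         # 4 characters, not 5 bytes
text = text.upper()           # character-aware: é becomes É

# Encode back to bytes as late as possible (e.g. before writing or sending).
out = text.encode("utf-8")
```

Between the two boundaries, the program never touches raw bytes, so no part of it depends on the encoding of the input or output.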

## 9.2. Unicode support levels¶

There are different levels of Unicode support:

• no Unicode support: the program only works correctly if all inputs and outputs use the same encoding, usually the locale encoding; text is stored in byte strings.
• basic Unicode support: decode inputs and encode outputs using the correct encodings; usually only BMP characters are supported. Use Unicode strings, or byte strings with the locale encoding or, better, an encoding of the UTF family (e.g. UTF-8).
• full Unicode support: have access to the Unicode database, normalize text, render correctly bidirectional texts and characters with diacritics.

These levels should help you to estimate the status of the Unicode support of your project. Basic support is enough if all of your users speak the same language or live in neighbouring countries. Basic Unicode support usually means excellent support of Western European languages. Full Unicode support is required to support Asian languages.

By default, the C, C++ and PHP5 languages have basic Unicode support. For the C and C++ languages, you can have basic or full Unicode support using a third-party library like glib, Qt or ICU. With PHP5, you can have basic Unicode support using “mb_” functions.

By default, the Python 2 language doesn’t support Unicode. You can have basic Unicode support if you store text in the unicode type and take care of input and output encodings. For Python 3, the situation is different: it has basic Unicode support out of the box, using the wide character API on Windows and taking care of input and output encodings for you (e.g. it decodes command line arguments and environment variables). The unicodedata module is a first step towards full Unicode support.
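As a sketch of what the unicodedata module offers towards full Unicode support, it exposes the Unicode database (character names and categories) and text normalization:

```python
import unicodedata

# Look up a character in the Unicode database.
assert unicodedata.name("\u00e9") == "LATIN SMALL LETTER E WITH ACUTE"
assert unicodedata.category("\u00e9") == "Ll"   # lowercase letter

# Normalize text: NFD decomposes, NFC recomposes.
decomposed = unicodedata.normalize("NFD", "\u00e9")   # e + combining acute
assert len(decomposed) == 2
assert unicodedata.normalize("NFC", decomposed) == "\u00e9"
```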

Most UNIX and Windows programs don’t support Unicode. The Firefox web browser and the OpenOffice.org office suite have full Unicode support. Slowly, more and more programs are gaining basic Unicode support.

Don’t expect to have full Unicode support directly: it requires a lot of work. Your project may be fully Unicode compliant for a specific task (e.g. filenames), but only have basic Unicode support for the other parts of the project.

## 9.3. Test the Unicode support of a program¶

Tests to evaluate the Unicode support of a program:

• Write non-ASCII characters (e.g. é, U+00E9) in all input fields: if the program fails with an error, it has no Unicode support.
• Write characters not encodable to the locale encoding (e.g. Ł, U+0141) in all input fields: if the program fails with an error, it probably has basic Unicode support.
• To test if a program is fully Unicode compliant, write text mixing different languages in different directions and characters with diacritics, especially Persian text. Also try decomposed characters, for example: {e, U+0301} (the decomposed form of é, U+00E9).
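The test inputs above can be built explicitly with escape sequences, which makes it clear which code points are involved (the sample strings are illustrative, not exhaustive):

```python
import unicodedata

non_ascii  = "\u00e9"     # é: rejects it -> no Unicode support
non_locale = "\u0141"     # Ł: not encodable to Latin-1 or cp1252
decomposed = "e\u0301"    # e + combining acute accent (U+0301)

# Ł cannot be encoded to a Western European locale encoding.
try:
    non_locale.encode("iso-8859-1")
    raise AssertionError("should not be encodable")
except UnicodeEncodeError:
    pass

# A fully Unicode compliant program treats the decomposed and
# composed forms of é as equivalent (after NFC normalization).
assert unicodedata.normalize("NFC", decomposed) == non_ascii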

## 9.4. Get the encoding of your inputs¶

Console:

File formats:

• XML: the encoding can be specified in the <?xml ...?> header, use UTF-8 if the encoding is not specified. For example, <?xml version="1.0" encoding="iso-8859-1"?>.
• HTML: the encoding can be specified in the Content-Type HTTP header, or in a <meta> element that mimics it, e.g. <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">. If it is not specified, you have to guess the encoding.
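A minimal sketch of reading the encoding from an XML declaration with a regular expression; a real program should rely on an XML parser (e.g. xml.etree.ElementTree), which handles this for you:

```python
import re

def xml_encoding(data: bytes) -> str:
    """Extract the encoding from an <?xml ...?> declaration.

    Falls back to UTF-8 when no encoding is specified, as the text
    above recommends. This is a sketch, not a conforming XML parser.
    """
    match = re.match(rb'<\?xml[^>]*encoding=["\']([^"\']+)["\']', data)
    if match:
        return match.group(1).decode("ascii")
    return "utf-8"

assert xml_encoding(b'<?xml version="1.0" encoding="iso-8859-1"?>') == "iso-8859-1"
assert xml_encoding(b'<?xml version="1.0"?>') == "utf-8"
```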

Filesystem (filenames):

## 9.5. Switch from byte strings to character strings¶

Use character strings instead of byte strings to avoid mojibake issues.
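Mojibake appears when byte strings are decoded with the wrong encoding; a short demonstration:

```python
raw = "caf\u00e9".encode("utf-8")   # b'caf\xc3\xa9'

# Decoding UTF-8 bytes as ISO-8859-1 produces mojibake: each UTF-8
# byte is misread as a separate Latin-1 character.
wrong = raw.decode("iso-8859-1")    # "cafÃ©"
right = raw.decode("utf-8")         # "café"

assert wrong == "caf\u00c3\u00a9"
assert right == "caf\u00e9"
```

Keeping text in character strings, and decoding with the correct encoding at the boundary, makes this class of bug impossible in the rest of the program.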