:meta-keywords: cubrid strings, charset, codeset, collation, cubrid locale
:meta-description: Globalization includes internationalization and localization. Internationalization can be applied to various languages and regions. Localization fits the language and culture of a specific area by appending language-specific components. CUBRID supports multilingual collations, including European and Asian languages, to facilitate localization.

.. _globalization-overview:

An Overview of Globalization
============================

Character data
--------------

Character data (strings) may be stored as VARCHAR(STRING), CHAR, or ENUM, and each of these types supports a charset and a collation.

The charset (character set, codeset) controls how characters are stored (on any type of storage) as bytes, and how a series of bytes forms a character. CUBRID supports the ISO-8859-1, UTF-8 and EUC-KR charsets. For UTF-8, only Unicode characters up to codepoint 10FFFF (encoded on up to four bytes) are supported. For instance, the character "È" is encoded in the ISO-8859-1 codeset as a single byte (C8), in UTF-8 as two bytes (C3 88), while in EUC-KR this character is not available.

The collation decides how strings compare. Most often, users require case-insensitive and case-sensitive comparisons. For instance, the strings "ABC" and "abc" are equal in a case-insensitive collation, while in a case-sensitive collation they are not, and depending on other collation settings the relationship can be "ABC" < "abc" or "ABC" > "abc".

Collation means more than comparing character casing. A collation decides the relationship between two strings (greater, less, equal), and is used in string matching (LIKE) and in computing boundaries in index scans.

In CUBRID, a collation implies a charset. For instance, the collations "utf8_en_ci" and "iso88591_en_ci" both perform case-insensitive comparison, but operate on different charsets.
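The per-charset byte encodings described above can be checked with any Unicode-aware tool; as a minimal sketch, here is the same comparison done with Python's standard codecs (illustrative only, not CUBRID code):

```python
# Illustration (using Python's standard codecs, not CUBRID itself)
# of how the same character is encoded differently per charset.
ch = "\u00C8"  # the character "È"

print(ch.encode("iso-8859-1").hex())  # c8   -> one byte
print(ch.encode("utf-8").hex())       # c388 -> two bytes

# EUC-KR has no encoding for this character:
try:
    ch.encode("euc-kr")
except UnicodeEncodeError:
    print("not representable in EUC-KR")
```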
Although the results are similar for the ASCII range in these particular cases, the collation "utf8_en_ci" is slower, since it has to work on a variable number of bytes (UTF-8 encoding).

* "'a' COLLATE iso88591_en_ci" indicates "_iso88591'a' COLLATE iso88591_en_ci".
* "'a' COLLATE utf8_en_ci" indicates "_utf8'a' COLLATE utf8_en_ci".

All string data types support precision. Special care is required with fixed-length characters (CHAR). Values of this type are padded to fill up the precision. For instance, inserting "abc" into a CHAR(5) column results in storing "abc  " (2 padding spaces are added). Space (ASCII 32 decimal, Unicode 0020) is the padding character for most charsets. For the EUC-KR charset, however, the padding consists of one character which is stored as two bytes (A1 A1).

Related Terms
-------------

.. In the below, currency part is removed because MONETARY is deprecated.

* **Character set**: A group of encoded symbols (assigning a specific number to a certain symbol).
* **Collation**: A set of rules for comparing characters in the character set and for sorting data.
* **Locale**: A set of parameters that defines any special variant preferences such as number format, calendar format (month and day names), date/time format, and collation, depending on the operator's language and country. A locale defines the linguistic localization; the character set of the locale defines how month names and other data are encoded. A locale identifier consists of at least a language identifier and a region identifier, and is expressed as language[_territory][.codeset] (for example, Australian English using UTF-8 encoding is written as en_AU.UTF-8).
* **Unicode normalization**: The specification, by the Unicode character encoding standard, of which sequences of code points represent essentially the same character.
  CUBRID uses Normalization Form C (NFC: codepoints are canonically decomposed and then recomposed) for input and Normalization Form D (NFD: codepoints are canonically decomposed) for output. However, as an exception, CUBRID does not apply the canonical equivalence rule. For example, under the general NFC rule, canonical equivalence is applied, so codepoint 212A (KELVIN SIGN "K") is converted to codepoint 4B (ASCII uppercase "K"). To keep the normalization algorithm quicker and simpler, CUBRID does not perform this conversion, and consequently it does not perform the reverse conversion either.

* **Canonical equivalence**: A basic equivalence between characters or sequences of characters which cannot be visually distinguished when they are correctly rendered. For example, consider 'Å' ('A' with a ring, the angstrom sign, Unicode U+212B) and Latin 'Å' (Unicode U+00C5): they look the same but have different codepoints; however, both decompose to 'A' followed by U+030A, so they are canonically equivalent.
* **Compatibility equivalence**: A weaker equivalence between characters or sequences of characters that represent the same abstract character. For example, consider the digit '2' (Unicode U+0032) and the superscript '²' (Unicode U+00B2). '²' is a different form of the digit '2': it is visually distinct and has a different meaning, so the two are not canonically equivalent. When normalizing '2²' with NFC, '2²' is kept, since NFC uses only canonical equivalence. With NFKC, however, '²' is decomposed to '2' under compatibility equivalence, and the result is '22'. CUBRID's Unicode normalization does not apply the compatibility equivalence rule.

For an explanation of Unicode normalization in CUBRID, see :ref:`unicode-normalization`. For more details, see http://unicode.org/reports/tr15/.

The default values of the system parameters related to Unicode normalization are unicode_input_normalization=no and unicode_output_normalization=no.
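The canonical and compatibility equivalences described above can be reproduced with Python's standard unicodedata module (illustrative only; note that full NFC applies the canonical-singleton rule which CUBRID intentionally skips):

```python
# Illustration (with Python's unicodedata, not CUBRID's own normalizer)
# of the canonical vs. compatibility equivalences described above.
import unicodedata

# NFC recomposes the decomposed angstrom sign into a single codepoint.
decomposed = "A\u030A"                                        # 'A' + COMBINING RING ABOVE
assert unicodedata.normalize("NFC", decomposed) == "\u00C5"   # 'Å'

# Full NFC applies canonical equivalence: KELVIN SIGN becomes 'K'.
# (CUBRID skips exactly this rule.)
assert unicodedata.normalize("NFC", "\u212A") == "K"

# NFC leaves the superscript two alone; NFKC folds it to '2'.
assert unicodedata.normalize("NFC", "2\u00B2") == "2\u00B2"
assert unicodedata.normalize("NFKC", "2\u00B2") == "22"
```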
For a more detailed description of these parameters, see :ref:`stmt-type-parameters`.

Locale Attributes
-----------------

A locale is defined by the following attributes.

.. ", monetary currency" is removed from above: MONETARY is deprecated.

* **Charset (codeset)**: How bytes are interpreted into single characters (Unicode codepoints).
* **Collations**: Locale data may contain several collations. Among all collations defined in the locale's `LDML (UNICODE Locale Data Markup Language) <http://www.unicode.org/reports/tr35/>`_ file, the last one is the default collation.
* **Alphabet (casing rules)**: One locale may have up to 2 alphabets, one for identifiers and one for user data.
* **Calendar**: Names of weekdays, months, and day periods (AM/PM).
* **Numbering settings**: Symbols for digit grouping.
* **Text conversion data**: For CSQL conversion. Optional.
* **Unicode normalization data**: Data converted by normalizing several characters with the same shape into one, based on a specified rule. After normalization, characters with the same shape will have the same code value even if the locale is different. Each locale can activate/deactivate the normalization functionality.

.. note:: Generally, a locale supports a variety of character sets. However, CUBRID locales support both ISO and UTF-8 character sets only for English and Korean. The other user-defined locales configured through LDML files support the UTF-8 character set only.

.. _collation-properties:

Collation Properties
--------------------

A collation is an assembly of information which defines an order for characters and strings. In CUBRID, a collation has the following properties.

* **Strength**: This is a measure of how "different" basic comparable items (characters) are. This affects selectivity. In LDML files, collation strength is configurable and has four levels. For example, a case-insensitive collation should be set with level = "secondary" (2) or "primary" (1).
* Whether or not it supports **expansions** and **contractions**.

Each column has a collation, so when applying the :func:`LOWER` and :func:`UPPER` functions, the casing rules of the locale which defines the collation's default language are used.

Depending on collation properties, some CUBRID optimizations may be disabled for some collations:

* **LIKE** rewrite: disabled for collations which map several different characters to the same weight (case-insensitive collations, for example) and for collations with expansions.
* Covering index scan: disabled for collations which map several different characters to the same weight (see :ref:`covering-index`).

For more information, see :ref:`collation-setting-impacted-features`.

.. _collation-naming-rules:

Collation Naming Rules
----------------------

The collation name in CUBRID follows the convention:

::

    <charset>_<lang>_<modifier1>_<modifier2>...

* <charset>: The full charset name as used by CUBRID: iso88591, utf8, euckr.
* <lang>: A region/language specifier. The language code is expected as two characters: en, de, es, fr, it, ja, km, ko, tr, vi, zh, ro. "gen" is used if the collation does not address a specific language, but a more general sorting rule.
* <modifier1>_<modifier2>...: The modifiers have the following meanings. Most of them apply only to LDML collations.

  * ci: case insensitive. In LDML, this can be obtained using the settings: strength="secondary" caseLevel="off" caseFirst="off".
  * cs: case sensitive. By default, all collations are case sensitive. In LDML, this can be obtained using at least: strength="tertiary".
  * bin: the sorting order under such a collation is almost the same as the order of codepoints; if memory (byte) comparison is used, almost the same result is obtained. The space character and the EUC double-byte padding character always sort as zero in a "bin" collation. No collations with this setting are currently configured in LDML (they are already available as built-in), but a similar one can be obtained using the maximum setting strength="quaternary" or strength="identical".
  * ai: accent insensitive; this means that 'Á' is sorted the same as 'A'. Due to particularities of the UCA-based algorithms, an accent-insensitive collation is also a case-insensitive collation. In LDML, this can be obtained using: strength="primary".
  * uca: signals a UCA-based collation; this is used only to differentiate such collations from similar built-in variants. All LDML collations are based on UCA, but in order to keep names short, only two collations ('utf8_ko_cs_uca', 'utf8_tr_cs_uca') carry this marker in their names, to differentiate them from the 'utf8_ko_cs' and 'utf8_tr_cs' collations.
  * exp: these collations use a full-word matching/compare algorithm, contrary to the rest of the collations, which compare character by character. This is a more complex algorithm, with multiple passes, which is much slower, but may prove useful for alphabetical sorts. In LDML, the :ref:`expansion` needs to be made explicit by adding CUBRIDExpansions="use".
  * ab: accent backwards; a particularity of French-Canadian sorting, where level 2 of UCA (used to store accent weights) is compared from the end of the string towards the beginning. This collation setting can be used only when the :ref:`expansion` setting is also activated. The "backwards" setting allows for the following sorting:

    * Normal accent ordering: cote < coté < côte < côté
    * Backward accent ordering: cote < côte < coté < côté

  * cbm: contraction boundary match; a particularity of collations with :ref:`expansion` and :ref:`contraction`, it refers to how string matching behaves when a :ref:`contraction` is found. Suppose the collation defines the contraction "ch"; then normally the pattern "bac" will not match the string "bachxxx". But when the collation is configured to allow "matching the characters starting a contraction", the above matching returns a positive result.
Only one collation is configured in this manner, 'utf8_ja_exp_cbm', since Japanese sorting requires a lot of contractions.

The collation names are not dynamically generated. They are user defined (configured in LDML), and should reflect the settings of the collation.

The name of a collation influences its internal numeric id. For instance, in CUBRID only 256 collations are allowed, and the numeric IDs are assigned as:

* 0 - 31: built-in collations (for these collations the name and id are hard-coded)
* 32 - 46: LDML collations having "gen" as the "language" part
* 47 - 255: the rest of the LDML collations

If you want to include into the database all locales which CUBRID provides, first copy cubrid_locales.all.txt of the $CUBRID/conf directory into cubrid_locales.txt, and then run the make_locale script (the extension is .sh on Linux and .bat on Windows). For more details on the make_locale script, see :ref:`locale-compilation`.

If you want to include the newly added locale information into an existing database, run "cubrid synccolldb <dbname>". For more information, see :ref:`synccolldb`.

If you include all locales defined in the LDML files, CUBRID has the following collations.

.. _cubrid-all-collation:

CUBRID Collation
^^^^^^^^^^^^^^^^

+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| Collation         | Locale for casing                                                     | Character range                           |
+===================+=======================================================================+===========================================+
| iso88591_bin      | en_US - English                                                       | ASCII + ISO88591 (C0-FE, except D7, F7)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| iso88591_en_cs    | en_US - English                                                       | ASCII + ISO88591 (C0-FE, except D7, F7)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| iso88591_en_ci    | en_US - English                                                       | ASCII + ISO88591 (C0-FE, except D7, F7)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_bin          | en_US - English                                                       | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| euckr_bin         | ko_KR - Korean, same as en_US - English                               | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_en_cs        | en_US - English                                                       | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_en_ci        | en_US - English                                                       | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_tr_cs        | tr_TR - Turkish                                                       | Turkish alphabet                          |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ko_cs        | ko_KR - Korean, same as en_US - English                               | ASCII                                     |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_gen          | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_gen_ai_ci    | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_gen_ci       | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_de_exp_ai_ci | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_de_exp       | de_DE - German, generic Unicode casing customized with German rules   | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ro_cs        | ro_RO - Romanian, same as generic Unicode casing                      | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_es_cs        | es_ES - Spanish, same as generic Unicode casing                       | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_fr_exp_ab    | fr_FR - French, same as generic Unicode casing                        | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ja_exp       | ja_JP - Japanese, same as generic Unicode casing                      | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ja_exp_cbm   | ja_JP - Japanese, same as generic Unicode casing                      | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_km_exp       | km_KH - Cambodian, same as generic Unicode casing                     | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_ko_cs_uca    | ko_KR - Korean, same as generic Unicode casing                        | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_tr_cs_uca    | tr_TR - Turkish, generic Unicode casing customized with Turkish rules | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| utf8_vi_cs        | vi_VN - Vietnamese, same as generic Unicode casing                    | All Unicode codepoints in range 0000-FFFF |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+
| binary            | none (invariant to casing operations)                                 | any byte value (zero is nul-terminator)   |
+-------------------+-----------------------------------------------------------------------+-------------------------------------------+

The Turkish casing rules change the casing for the characters i, I, ı, İ. The German casing rules change the casing for ß.

Among the above collations, the 9 collations iso88591_bin, iso88591_en_cs, iso88591_en_ci, utf8_bin, euckr_bin, utf8_en_cs, utf8_en_ci, utf8_tr_cs and utf8_ko_cs are built into CUBRID before running the make_locale script.

If a collation is included in more than one locale (.ldml) file, the locale for casing (the default locale of the collation) is the locale in which it is first included. The order of loading is the order of locales in $CUBRID/conf/cubrid_locales.txt. The locale casing shown above for the collations utf8_gen, utf8_gen_ci and utf8_gen_ai_ci assumes the default (alphabetical) order in cubrid_locales.txt, so the default locale for all generic LDML collations is de_DE (German).

Files For Locale Setting
------------------------

CUBRID uses the following directories and files to set the locales.

* **$CUBRID/conf/cubrid_locales.txt** file: A configuration file containing the list of locales to be supported.
* **$CUBRID/conf/cubrid_locales.all.txt** file: A configuration file template with the same structure as **cubrid_locales.txt**. It contains the entire list of locales that the current version of CUBRID is capable of supporting without any effort on the end user's side.
* **$CUBRID/locales/data** directory: Contains the files required to generate locale data.
* **$CUBRID/locales/loclib** directory: Contains a C header file, **locale_lib_common.h**, and OS dependent makefiles which are used in the process of creating/generating locale shared libraries.
* **$CUBRID/locales/data/ducet.txt** file: A text file containing the default universal collation information (codepoints, contractions and expansions, to be more specific) and their weights, as standardized by the Unicode Consortium. It is the starting point for the creation of collations. For more information, see http://unicode.org/reports/tr10/#Default_Unicode_Collation_Element_Table .
* **$CUBRID/locales/data/unicodedata.txt** file: A text file containing information about each Unicode codepoint regarding casing, decomposition, normalization, etc. CUBRID uses this to determine casing. For more information, see https://docs.python.org/3/library/unicodedata.html .
* **$CUBRID/locales/data/ldml** directory: Contains common_collations.xml and XML files named with the convention cubrid_<*locale_name*>.xml. The common_collations.xml file contains collation information shared by all locale files, and each cubrid_<*locale_name*>.xml file contains the locale information for one supported language.
* **$CUBRID/locales/data/codepages** directory: Contains codepage console conversion files for single byte codepages (8859-1.txt, 8859-15.txt, 8859-9.txt) and for double byte codepages (CP1258.txt, CP923.txt, CP936.txt, CP949.txt).
* **$CUBRID/bin/make_locale.sh** file or **%CUBRID%\\bin\\make_locale.bat** file: A script used to generate shared libraries for locale data.
* **$CUBRID/lib** directory: Shared libraries for generated locales are stored here.

.. _locale-setting:

Locale Setting
==============

When you want to use the charset and collation of a specific language, the charset must be identical to that of the database to be created. The supported CUBRID charsets are ISO-8859-1, EUC-KR and UTF-8, and the charset to be used is specified when creating a database.

For example, if you created a database with the locale ko_KR.utf8, you can use collations starting with "utf8\_", like utf8_ja_exp.
However, if you set the locale to ko_KR.euckr, you cannot use any collations related to another charset (see :ref:`cubrid-all-collation`).

The following is an example which uses utf8_ja_exp after creating a database with en_US.utf8.

#. cd $CUBRID/conf
#. cp cubrid_locales.all.txt cubrid_locales.txt
#. make_locale.sh -t64  # 64 bit locale library creation
#. cubrid createdb testdb en_US.utf8
#. cubrid server start testdb
#. csql -u dba testdb
#. Run the queries below in csql.

   .. code-block:: sql

        SET NAMES utf8;
        CREATE TABLE t1 (i1 INT, s1 VARCHAR(20) COLLATE utf8_ja_exp, a INT, b VARCHAR(20) COLLATE utf8_ja_exp);
        INSERT INTO t1 VALUES (1, '¤¤¥¤»ù±P', 1, '¤¤¥¤ ÀO');

For more details, see the following.

.. _locale-selection:

Step 1: Selecting a Locale
--------------------------

Configure the locales to use in **$CUBRID/conf/cubrid_locales.txt**. You can select all or some of the supported locales. CUBRID supports the following locales: en_US, de_DE, es_ES, fr_FR, it_IT, ja_JP, km_KH, ko_KR, tr_TR, vi_VN, zh_CN, ro_RO. The language and country for each locale are shown in the following table.

+-----------------+------------------------+
| Locale Name     | Language - Country     |
+=================+========================+
| en_US           | English - U.S.A.       |
+-----------------+------------------------+
| de_DE           | German - Germany       |
+-----------------+------------------------+
| es_ES           | Spanish - Spain        |
+-----------------+------------------------+
| fr_FR           | French - France        |
+-----------------+------------------------+
| it_IT           | Italian - Italy        |
+-----------------+------------------------+
| ja_JP           | Japanese - Japan       |
+-----------------+------------------------+
| km_KH           | Khmer - Cambodia       |
+-----------------+------------------------+
| ko_KR           | Korean - Korea         |
+-----------------+------------------------+
| tr_TR           | Turkish - Turkey       |
+-----------------+------------------------+
| vi_VN           | Vietnamese - Vietnam   |
+-----------------+------------------------+
| zh_CN           | Chinese - China        |
+-----------------+------------------------+
| ro_RO           | Romanian - Romania     |
+-----------------+------------------------+

.. note::

    The LDML files for the supported locales are named cubrid_<*locale_name*>.xml and can be found in the **$CUBRID/locales/data/ldml** directory. If only a subset of these locales is to be supported by CUBRID, make sure their corresponding LDML files are present in the **$CUBRID/locales/data/ldml** folder. A locale cannot be used by CUBRID unless it has an entry in the **cubrid_locales.txt** file and a corresponding cubrid_<*locale_name*>.xml file.

Locale libraries are generated according to the contents of the **$CUBRID/conf/cubrid_locales.txt** configuration file. This file contains the language codes of the wanted locales (all user defined locales are generated with the UTF-8 charset). Optionally, the file paths for each locale's LDML file and library can also be configured in this file.

::

    ko_KR /home/CUBRID/locales/data/ldml/cubrid_ko_KR.xml /home/CUBRID/lib/libcubrid_ko_KR.so

By default, the LDML files are found in **$CUBRID/locales/data/ldml** and the locale libraries in **$CUBRID/lib**; the filenames for LDML are formatted like cubrid_<*lang_name*>.xml.
The filenames for the libraries are libcubrid_<*lang_name*>.dll (.so for Linux).

.. _locale-compilation:

Step 2: Compiling Locale
------------------------

Once the requirements described above are met, the locales can be compiled.

The locales embedded in CUBRID can be used without compiling a user locale library, so this step can be skipped for them. However, there are differences between the embedded locales and library locales; regarding this, see :ref:`Built-in Locale and Library Locale <built-in-locale-limit>`.

To compile the locale libraries, use the **make_locale** utility script (**.bat** for Windows and **.sh** for Linux) from a command console. The file is delivered in the **$CUBRID/bin** folder, so it should be resolved by the **$PATH** environment variable. Here **$CUBRID** and **$PATH** are the environment variables of Linux; **%CUBRID%** and **%PATH%** are the environment variables of Windows.

.. note:: To run the **make_locale** script on Windows, Visual C++ 2005, 2008 or 2010 is required.

Usage can be displayed by running **make_locale.sh -h** (**make_locale /h** on Windows).

::

    make_locale.sh [options] [locale]

        options ::= [-t 32|64] [-m debug|release]
        locale ::= [de_DE|es_ES|fr_FR|it_IT|ja_JP|km_KH|ko_KR|tr_TR|vi_VN|zh_CN|ro_RO]

* *options*

  * **-t**: Selects 32bit or 64bit (default value: **64**).
  * **-m**: Selects release or debug. In general, release is selected (default value: release). The debug mode is provided for developers who would like to write the locale library themselves.

* *locale*: The locale name of the library to build. If *locale* is not specified, the build includes data from all configured locales. In this case, the library file is stored in the **$CUBRID/lib** directory with the name **libcubrid_all_locales.so** (**.dll** for Windows).
To create user defined locale shared libraries, two choices are available:

* Creating a single library with all locales to be supported.

  ::

    make_locale.sh -t64     # build and pack all locales (64bit/release)

* Creating one library for each locale to be supported.

  ::

    make_locale.sh -t 64 -m release ko_KR

The first choice is recommended. In this scenario, some data may be shared among locales: a library supporting all locales takes less than 15 MB, whereas with the second choice each locale library takes from 1 MB to more than 5 MB. The first choice is also recommended because it avoids the runtime overhead of loading one library per locale when restarting the servers.

.. warning::

    **Limitations and Rules**

    * Do not change the contents of **$CUBRID/conf/cubrid_locales.txt** after locale generation; once the locale libraries are generated, the contents of **$CUBRID/conf/cubrid_locales.txt** should not be changed (the order of languages within the file must also be preserved). During locale compilation, the generic collations use the first locale as the default; changing the order may produce different casing results for such collations (utf8_gen_*).
    * Do not change the contents of the **$CUBRID/locales/data/*.txt** files.

.. note::

    **Procedure of Executing the make_locale.sh(.bat) Script**

    The **make_locale.sh(.bat)** script:

    #. Reads the **.ldml** file corresponding to a language, along with some other installed common data files like **$CUBRID/locales/data/ducet.txt**, **$CUBRID/locales/data/unicodedata.txt**, and **$CUBRID/locales/data/codepages/*.txt**.
    #. After processing the raw data, writes C constant values and arrays consisting of the locale data into a temporary file, **$CUBRID/locales/loclib/locale.c**.
    #. Passes the temporary **locale.c** file to the platform compiler to build a **.dll/.so** file. This step assumes that the machine has an installed C/C++ compiler and linker. Currently, only MS Visual Studio for Windows and gcc for Linux are supported.
    #. Removes the temporary files.

Step 3: Setting CUBRID to Use a Specific Locale
-----------------------------------------------

Only one locale can be selected as the default locale when you create a database. In addition to specifying a default locale, you can override the default calendar settings with the calendar settings of another locale, using the **intl_date_lang** system parameter.

* The locale is given in the format <*locale_name*>.[**utf8** | **iso**] (e.g. tr_TR.utf8, en_US.iso88591, ko_KR.utf8).
* **intl_date_lang**: <*locale_name*>. The possible values for <*locale_name*> are listed in :ref:`locale-selection`.

.. note::

    **Setting the Month/Day in Characters, AM/PM, and Number Format**

    For functions that input and output dates/times, you can set the month/day names, AM/PM, and number format through the locale in the **intl_date_lang** system parameter. Likewise, for functions that convert a string to numbers or numbers to a string, you can set the string format through the locale in the **intl_number_lang** system parameter.

.. _built-in-locale-limit:

Built-in Locale and Library Locale
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The locales embedded in CUBRID can be used without compiling a user locale library, so step 2 can be skipped for them. However, there are two differences between the embedded locales and library locales.

* Embedded (built-in) locales (and collations) are not aware of full Unicode data. For instance, casing (lower, upper) of accented letters (e.g. 'á') is not available in embedded locales. The LDML locales provide data for Unicode codepoints up to 65535.
* The embedded collations deal only with the ASCII range, or, in the case of 'utf8_tr_cs', with ASCII plus the letters of the Turkish alphabet. Embedded UTF-8 locales are not Unicode compatible, while compiled (LDML) locales are.
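The practical difference between ASCII-only casing (what the embedded locales do) and Unicode-aware casing (what the compiled LDML locales provide) can be sketched with Python's Unicode-aware string methods; this is an analogy under stated assumptions, not CUBRID code:

```python
# Illustration (Python's Unicode-aware casing, used here as an analogy):
# a Unicode-aware locale can case accented letters, while an
# ASCII-only (embedded) locale leaves them unchanged.
s = "án"

print(s.upper())          # ÁN -> Unicode-aware casing (what LDML locales provide)

# ASCII-only casing, a sketch of what an embedded locale does:
ascii_upper = "".join(
    c.upper() if "a" <= c <= "z" else c
    for c in s
)
print(ascii_upper)        # áN -> the accented letter is not cased
```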
Currently, the built-in locales which can be set during database creation are as follows:

* en_US.iso88591
* en_US.utf8
* ko_KR.utf8
* ko_KR.euckr
* ko_KR.iso88591: Uses Romanized Korean names for months and days.
* tr_TR.utf8
* tr_TR.iso88591: Uses Romanized Turkish names for months and days.

The order stated above is important; if no charset is given while creating a database, the charset of the first matching locale in this list is used. For example, if the locale is set to ko_KR (e.g. cubrid createdb testdb ko_KR), the charset becomes that of ko_KR.utf8, the first ko_KR locale in the above list.

Locales of languages other than the built-in locales should end with **.utf8**. For example, specify the locale as de_DE.utf8 for German.

The names of months and days for ko_KR.iso88591 and tr_TR.iso88591 must be Romanized. For example, "일요일" in Korean (Sunday in English) is Romanized to "Iryoil". Only ISO-8859-1 characters may be provided. For more information, see :ref:`romanized-names`.

.. _romanized-names:

The Month/Day in Korean and Turkish Characters for ISO-8859-1 Charset
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In Korean or Turkish locales with the UTF-8 charset, or in Korean locales with the EUC-KR charset, the month/day names and AM/PM are encoded natively for the country. For the ISO-8859-1 charset, however, if the native Korean or Turkish month/day names and AM/PM were used in their original encoding, unexpected behavior could occur in the server process because of the complexity of the expressions. Therefore, the names must be Romanized. The default charset of CUBRID is ISO-8859-1, and this charset can also be used for Korean and Turkish.
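The reason romanization is required can be seen with Python's standard codecs (illustrative only): native Korean names simply cannot be represented in ISO-8859-1, while the Romanized forms are plain ASCII and therefore valid ISO-8859-1:

```python
# Illustration (using Python's standard codecs) of why month/day
# names must be Romanized for the ISO-8859-1 charset.
sunday_ko = "일요일"          # Sunday, in Korean

try:
    sunday_ko.encode("iso-8859-1")
except UnicodeEncodeError:
    print("not representable in ISO-8859-1")

# The Romanized form is plain ASCII, hence valid ISO-8859-1:
print("Iryoil".encode("iso-8859-1"))   # b'Iryoil'
```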
The Romanized output format is as follows: **Day in Characters** +---------------------------------------+----------------------------------+----------------------------------+ | Day in Characters Long/Short Format | Long/Short Romanized Korean | Long/Short Romanized Turkish | +=======================================+==================================+==================================+ | Sunday / Sun | Iryoil / Il | Pazar / Pz | +---------------------------------------+----------------------------------+----------------------------------+ | Monday / Mon | Woryoil / Wol | Pazartesi / Pt | +---------------------------------------+----------------------------------+----------------------------------+ | Tuesday / Tue | Hwayoil / Hwa | Sali / Sa | +---------------------------------------+----------------------------------+----------------------------------+ | Wednesday / Wed | Suyoil / Su | Carsamba / Ca | +---------------------------------------+----------------------------------+----------------------------------+ | Thursday / Thu | Mogyoil / Mok | Persembe / Pe | +---------------------------------------+----------------------------------+----------------------------------+ | Friday / Fri | Geumyoil / Geum | Cuma / Cu | +---------------------------------------+----------------------------------+----------------------------------+ | Saturday / Sat | Toyoil / To | Cumartesi / Ct | +---------------------------------------+----------------------------------+----------------------------------+ **Month in Characters** +---------------------------------------+----------------------------------+----------------------------------+ | Month in Characters Long/Short Format | Long/Short Romanized Korean | Long/Short Romanized Turkish | | | (Not Classified) | | +=======================================+==================================+==================================+ | January / Jan | 1wol | Ocak / Ock | 
+---------------------------------------+----------------------------------+----------------------------------+ | February / Feb | 2wol | Subat / Sbt | +---------------------------------------+----------------------------------+----------------------------------+ | March / Mar | 3wol | Mart / Mrt | +---------------------------------------+----------------------------------+----------------------------------+ | April / Apr | 4wol | Nisan / Nsn | +---------------------------------------+----------------------------------+----------------------------------+ | May / May | 5wol | Mayis / Mys | +---------------------------------------+----------------------------------+----------------------------------+ | June / Jun | 6wol | Haziran / Hzr | +---------------------------------------+----------------------------------+----------------------------------+ | July / Jul | 7wol | Temmuz / Tmz | +---------------------------------------+----------------------------------+----------------------------------+ | August / Aug | 8wol | Agustos / Ags | +---------------------------------------+----------------------------------+----------------------------------+ | September / Sep | 9wol | Eylul / Eyl | +---------------------------------------+----------------------------------+----------------------------------+ | October / Oct | 10wol | Ekim / Ekm | +---------------------------------------+----------------------------------+----------------------------------+ | November / Nov | 11wol | Kasim / Ksm | +---------------------------------------+----------------------------------+----------------------------------+ | December / Dec | 12wol | Aralik / Arl | +---------------------------------------+----------------------------------+----------------------------------+ **AM/PM in Characters** +---------------------------------------+----------------------------------+----------------------------------+ | AM/PM in Characters Long/Short Format | Romanized in Korean | Romanized in Turkish | 
+=======================================+==================================+==================================+
| AM                                    | ojeon                            | AM                               |
+---------------------------------------+----------------------------------+----------------------------------+
| PM                                    | ohu                              | PM                               |
+---------------------------------------+----------------------------------+----------------------------------+

Step 4: Creating a Database with the Selected Locale Setting
------------------------------------------------------------

When issuing the command "**cubrid createdb** <*db_name*> <*locale_name.charset*>", a database will be created using the settings in the variables described above. Once the database is created, the locale setting given to it cannot be changed. The charset and locale name are stored in the "**db_root**" system catalog table.

.. _dumplocale:

Step 5 (optional): Manually Verifying the Locale File
-----------------------------------------------------

The contents of locale libraries may be displayed in human-readable form using the **dumplocale** CUBRID utility. Execute **cubrid dumplocale -h** to output the usage. The syntax is as follows. ::

    cubrid dumplocale [options] [language-string]

    options ::= -i|--input-file -d|--calendar -n|--numeric {-a|--alphabet=}{l|lower|u|upper|both} -c|--codepoint-order -w|--weight-order {-s|--start-value} {-e|--end-value} -k -z

    language-string ::= de_DE|es_ES|fr_FR|it_IT|ja_JP|km_KH|ko_KR|tr_TR|vi_VN|zh_CN|ro_RO

* **dumplocale**: A command which dumps the contents of a locale shared library previously generated from an LDML input file.
* *language-string*: One of de_DE, es_ES, fr_FR, it_IT, ja_JP, km_KH, ko_KR, tr_TR, vi_VN, zh_CN and ro_RO. Specifies the locale language of the locale shared library to dump. If it is not set, all languages configured in **cubrid_locales.txt** are dumped.

The following are the [options] for **cubrid dumplocale**.

.. program:: dumplocale

.. option:: -i, --input-file=FILE

    The name of the locale shared library file (<*shared_lib*>) created previously. It includes the directory path.

.. option:: -d, --calendar

    Dumps the calendar and date/time data. Default value: No

.. option:: -n, --numeric

    Dumps the number data. Default value: No

.. option:: -a, --alphabet=l|lower|u|upper|both

    Dumps the alphabet and case data. Default value: No

.. option:: --identifier-alphabet=l|lower|u|upper

    Dumps the alphabet and case data for identifiers. Default value: No

.. option:: -c, --codepoint-order

    Dumps the collation data sorted by codepoint value. Default value: No (displayed data: cp, char, weight, next-cp, char and weight)

.. option:: -w, --weight-order

    Dumps the collation data sorted by weight value. Default value: No (displayed data: weight, cp, char)

.. option:: -s, --start-value=CODEPOINT

    Specifies the dump scope. Starting codepoint for the **-a, \-\-identifier-alphabet, -c, -w** options. Default value: 0

.. option:: -e, --end-value=CODEPOINT

    Specifies the dump scope. Ending codepoint for the **-a, \-\-identifier-alphabet, -c, -w** options. Default value: the maximum value read from the locale shared library.

.. option:: -k, --console-conversion

    Dumps the console conversion data. Default value: No

.. option:: -z, --normalization

    Dumps the normalization data. Default value: No

The following example shows how to dump the calendar, number formatting, alphabet and case data, identifier alphabet and case data, collation sorted by codepoint order, collation sorted by weight, and normalization data of the ko_KR locale into ko_KR_dump.txt: ::

    % cubrid dumplocale -d -n -a both -c -w -z ko_KR > ko_KR_dump.txt

It is highly recommended to redirect the console output to a file, as the output can be very large, and finding information in it on the console could prove difficult.
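For example, a dump can be narrowed to a codepoint range. The following is a hypothetical invocation combining the **-a**, **-s** and **-e** options described above (the codepoint values 65 and 122 are assumed, for illustration, to cover the basic Latin letters): ::

    % cubrid dumplocale -a both -s 65 -e 122 tr_TR > tr_TR_alpha.txt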
Step 6: Starting CUBRID-Related Processes
-----------------------------------------

All CUBRID-related processes should be started with an identical environment setting. The CUBRID server, the broker, CAS, and CSQL should use locale binary files of an identical version. CUBRID HA should also use the same setting: the master server, slave server and replica server should use the same environment variable settings. There is no check on the compatibility of the locales used by the server and the CAS (client) process, so the user should make sure the LDML files used are the same.

Locale library loading is one of the first steps in CUBRID start-up. Locale (collation) information is required for initializing database structures (indexes depend on collations). This step is performed by each CUBRID process which requires locale information: server, CAS, CSQL, createdb, copydb, unload DB, load DB.

The process of loading a locale library is as follows.

* If no library path is provided, CUBRID will try to load the $CUBRID/lib/libcubrid_<*lang_name*>.so file; if this file is not found, then CUBRID assumes all locales are found in a single library: **$CUBRID/lib/libcubrid_all_locales.so**.
* If a suitable locale library cannot be found, or any other error occurs during loading, the CUBRID process stops.
* If the collations of the database and of the locale library differ, the CUBRID process cannot start. To include newly changed collations from the locale library, first synchronize the database collations with the system collations by running the **cubrid synccolldb** command. Next, update the schemas and data of the existing database to the wanted collations. For more details, see :ref:`synccolldb`.

.. _synccolldb:

Synchronization of Database Collations with System Collations
-------------------------------------------------------------

CUBRID's normal operation requires that the system collations and the database collations be the same.
The system locales are the locales available to the system: the built-in locales and the library locales created through cubrid_locales.txt (see :ref:`locale-setting`); they include the system collation information. The database collation information is stored in the **_db_collation** system catalog table.

The **cubrid synccolldb** utility checks whether the database collations are the same as the system collations, and synchronizes them with the system collations if they differ. However, note that this utility does not transform the data itself stored in the database.

This utility can be used when the existing database collations should be changed after the system locale has changed. However, there are operations which the user has to do manually, before the synchronization. These operations can be done by running CSQL with the cubrid_synccolldb_<*database_name*>.sql file, which is created by **cubrid synccolldb -c**.

* Change collations using the ALTER TABLE .. MODIFY statement.
* Remove any views, indexes, triggers or partitions containing the collation.

Run the synchronization with **cubrid synccolldb**. Afterwards, do the following operations.

* Recreate the views, indexes, triggers, or partitions
* Update application statements to use the new collations

This utility works only in offline mode.

The **synccolldb** syntax is as follows. ::

    cubrid synccolldb [options] database_name

* **cubrid**: An integrated utility for the CUBRID service and database management.
* **synccolldb**: A command to synchronize the collations of a database with the collations from the system (according to the contents of the locale libraries and $CUBRID/conf/cubrid_locales.txt).
* *database_name*: The name of the database whose collations are to be synchronized with those from the system.
If [options] is omitted, **synccolldb** checks the collation differences between the system and the database, synchronizes the database collations with the system collations, and creates the cubrid_synccolldb_<*database_name*>.sql file containing the queries for the objects that should be dropped before the synchronization.

The following are the [options] which can be used with **cubrid synccolldb**.

.. program:: synccolldb

.. option:: -c, --check-only

    This option prints out the collation information which differs between the database collations and the system collations.

.. option:: -f, --force-only

    This option does not ask for confirmation when updating the database collations with the system collations.

The following shows how it works when the system collations and the database collations are different.

First, build the locale library for the ko_KR locale. ::

    $ echo ko_KR > $CUBRID/conf/cubrid_locales.txt
    $ make_locale.sh -t 64

Next, create the database. ::

    $ cubrid createdb --db-volume-size=20M --log-volume-size=20M xdb en_US

Create a schema. At this point, specify the needed collation in each table. ::

    $ csql -S -u dba xdb -i in.sql

.. code-block:: sql

    CREATE TABLE dept(depname STRING PRIMARY KEY) COLLATE utf8_ko_cs_uca;
    CREATE TABLE emp(eid INT PRIMARY KEY, depname STRING, address STRING) COLLATE utf8_ko_cs_uca;
    ALTER TABLE emp ADD CONSTRAINT FOREIGN KEY (depname) REFERENCES dept(depname);

Change the locale setting of the system. If no values are set in **cubrid_locales.txt**, the database considers that only the built-in locales exist. ::

    $ echo "" > $CUBRID/conf/cubrid_locales.txt

Check the difference between the system and the database by running the **cubrid synccolldb -c** command. ::

    $ cubrid synccolldb -c xdb

    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_ko_cs_uca' (Id: 133) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_gen_ci' (Id: 44) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_gen_ai_ci' (Id: 37) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------
    Collation 'utf8_gen' (Id: 32) not found in database or changed in new system configuration.
    ----------------------------------------
    ----------------------------------------

    There are 4 collations in database which are not configured or are changed compared to system collations.
    Synchronization of system collation into database is required.
    Run 'cubrid synccolldb -f xdb'

If indexes exist, you should first remove the indexes and change the collation of each table, then recreate the indexes. The removal of the indexes and the change of the table collations can be executed using the cubrid_synccolldb_xdb.sql file, which was created by the **synccolldb** command. In the example below, the foreign key is the index which you should recreate. ::

    $ cat cubrid_synccolldb_xdb.sql

    ALTER TABLE [dept] COLLATE utf8_bin;
    ALTER TABLE [emp] COLLATE utf8_bin;
    ALTER TABLE [emp] DROP FOREIGN KEY [fk_emp_depname];
    ALTER TABLE [dept] MODIFY [depname] VARCHAR(1073741823) COLLATE utf8_bin;
    ALTER TABLE [emp] MODIFY [address] VARCHAR(1073741823) COLLATE utf8_bin;
    ALTER TABLE [emp] MODIFY [depname] VARCHAR(1073741823) COLLATE utf8_bin;

    $ csql -S -u dba -i cubrid_synccolldb_xdb.sql xdb

Removing the obsolete collations by executing the above cubrid_synccolldb_xdb.sql script file must be performed before forcing the synchronization of the system collations into the database.

Run the **cubrid synccolldb** command.
If the option is omitted, a message is shown asking whether to run the command; if the **-f** option is given, the synchronization runs without the confirmation message. ::

    $ cubrid synccolldb xdb

    Updating system collations may cause corruption of database. Continue (y/n) ?

    Contents of '_db_collation' system table was updated with new system collations.

Recreate the dropped foreign key. ::

    $ csql -S -u dba xdb

    ALTER TABLE emp ADD CONSTRAINT FOREIGN KEY fk_emp_depname(depname) REFERENCES dept(depname);

.. note::

    In CUBRID, collations are identified by an ID number on the CUBRID server, and the range of IDs is 0 to 255. The LDML file is compiled into a shared library, which provides the mapping between the IDs and the collations (name, attributes).

    * The system collations are the collations loaded from the locale libraries by the CUBRID server and the CAS module.
    * The database collations are the collations stored in the **_db_collation** system table.

.. _collation:

Collation
=========

A collation is an assembly of information which defines an order for characters and strings. One common type of collation is alphabetization.

If not explicitly set otherwise at column creation, the charset and collation of a column are the charset and collation of its table; if not set at the table level either, they default to the database charset and collation. If the result of an expression is of character data type, its collation and charset are determined by collation inference on the operands of the expression.

.. note:: \ In CUBRID, collations are supported for a number of languages, including European and Asian. In addition to the different alphabets, some of these languages may require the definition of expansions or contractions for some characters or character groups. Most of these aspects have been put together by the Unicode Consortium into The Unicode Standard (up to version 6.1.0 in 2012).
Most of the information is stored in the DUCET file `allkeys.txt <http://www.unicode.org/Public/UCA/latest/allkeys.txt>`_, which contains all characters required by most languages. Most of the codepoints represented in DUCET are in the range 0 - FFFF, but codepoints beyond this range are included. However, CUBRID ignores the latter and uses only the codepoints in the range 0 - FFFF (or a lower value, if configured).

Each codepoint in DUCET has one or more 'collation elements' attached to it. A collation element is a set of four numeric values, representing weights for 4 levels of comparison. Weight values are in the range 0 - FFFF.

In DUCET, a character is represented on a single line, in the form: ::

    < codepoint_or_multiple_codepoints > ; [.W1.W2.W3.W4][....].... # < readable text explanation of the symbol/character >

The Korean character kiyeok is represented as follows: ::

    1100 ; [.313B.0020.0002.1100] # HANGUL CHOSEONG KIYEOK

In this example, 1100 is the codepoint, [.313B.0020.0002.1100] is its single collation element, 313B is the weight of level 1, 0020 is the weight of level 2, 0002 is the weight of level 3, and 1100 is the weight of level 4.

Expansion support, defined as a functional property, means supporting the interpretation of a composed character as the pair of characters it is made of. A rather obvious example is interpreting the character "æ" in the same way as the two-character string "ae". This is an expansion. In DUCET, expansions are represented by using more than one collation element for a codepoint or contraction. By default, CUBRID has expansions disabled. Handling collations with expansions requires several passes when comparing two strings (up to the collation strength/level).

.. _collation-charset-column:

Charset and Collation of Column
-------------------------------

Charset and collation apply to the string data types: **VARCHAR** (**STRING**), **CHAR** and **ENUM**.
By default, all string data types inherit the default database collation and character set, but CUBRID supports two modifiers which affect collation and character set.

Charset
^^^^^^^

The character set may be specified as a character string literal or as a non-quoted identifier. The supported character sets are:

* ISO-8859-1
* UTF-8 (with a maximum of 4 bytes per character, which means it supports codepoints from 0 to 0x10FFFF)
* EUC-KR (the support for this character set is kept only for backward compatibility reasons; its usage is not recommended)

.. note:: Versions of CUBRID previous to 9.0 supported EUC-KR characters when the ISO-8859-1 charset (the single one available) was set. From CUBRID 9.0 Beta, this is no longer available. EUC-KR characters should be used only with the EUC-KR charset.

String Check
^^^^^^^^^^^^

By default, all input data is assumed to be in the server character set specified when creating the database. This may be overridden by **SET NAMES** or a charset introducer (or the **COLLATE** string literal modifier); for more information, see :ref:`collation-charset-string`.

Invalid data may lead to undefined behavior or even crashes if string checking is disabled (it is disabled by default). It can be enabled with the **intl_check_input_string** system parameter. However, if you are sure that only valid data is input, you can obtain better performance by disabling the string check. Only UTF-8 and EUC-KR text data is checked for valid encodings. Since ISO-8859-1 is a single-byte encoding and all byte values are valid, there is no checking for this charset.

Charset Conversion
^^^^^^^^^^^^^^^^^^

When the **collation** / **charset** modifiers or normal collation inference require it, character conversion may occur. Conversions are not reversible. Generally, charset conversion is character transcoding (the bytes representing a character in one charset are replaced with other bytes representing the same character in the destination charset).

With any conversion, losses may occur.
If a character from the source charset cannot be encoded in the destination charset, it is replaced with a '?' character. This also applies to conversions from the binary charset to any other charset. The widest character support is with the UTF-8 charset, and since it encodes Unicode, one expects that all characters can be encoded. However, during conversion from ISO-8859-1 to UTF-8 some "losses" occur: bytes in the range 80-A0 are not valid ISO-8859-1 characters but may appear in strings. After conversion to UTF-8, these characters are replaced with '?'.

Rules for conversion of values from one charset to another:

+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| Source \\ Destination  | Binary                            | ISO-8859-1                        | UTF-8                         | EUC-KR                        |
+========================+===================================+===================================+===============================+===============================+
| **Binary**             | No change                         | No change.                        | No change.                    | No change.                    |
|                        |                                   | The byte size unchanged.          | Validation per character.     | Validation per character.     |
|                        |                                   | Character length unchanged.       | Invalid char replace with '?' | Invalid char replace with '?' |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| **ISO-8859-1**         | No change                         | No change                         | Byte conversion.              | Byte conversion.              |
|                        |                                   |                                   | The byte size increases.      | Byte size increases.          |
|                        |                                   |                                   | No loss of useful characters. | No loss of useful characters. |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| **UTF-8**              | No change.                        | Byte conversion.                  | No change                     | Byte conversion.              |
|                        | The byte size unchanged.          | Byte size may decrease.           |                               | Byte size may decrease.       |
|                        | Character length increases.       | Expect loss of characters.        |                               | Expect loss of characters.    |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+
| **EUC-KR**             | No change.                        | Byte conversion.                  | Byte conversion.              | No change                     |
|                        | The byte size unchanged.          | Byte size may decrease.           | Byte size may increase.       |                               |
|                        | Character length increases.       | Expect loss of characters.        | No loss of useful characters. |                               |
+------------------------+-----------------------------------+-----------------------------------+-------------------------------+-------------------------------+

.. note:: The 9.x versions of CUBRID did not support the binary charset. The ISO-8859-1 charset played the role of the current binary charset. Conversions from the UTF-8 and EUC-KR charsets to ISO-8859-1 were performed by reinterpreting the byte content of the source, not by character translation.

.. _collation-setting:

Collation
^^^^^^^^^

A collation may be specified as a character string literal or as a non-quoted identifier. The following is a query (SELECT * FROM db_collation WHERE is_builtin='Yes') on the **db_collation** system table. ::

    coll_id  coll_name         charset_name  is_builtin  has_expansions  contractions  uca_strength
    ================================================================================================
    0        'iso88591_bin'    'iso88591'    'Yes'       'No'            0             'Not applicable'
    1        'utf8_bin'        'utf8'        'Yes'       'No'            0             'Not applicable'
    2        'iso88591_en_cs'  'iso88591'    'Yes'       'No'            0             'Not applicable'
    3        'iso88591_en_ci'  'iso88591'    'Yes'       'No'            0             'Not applicable'
    4        'utf8_en_cs'      'utf8'        'Yes'       'No'            0             'Not applicable'
    5        'utf8_en_ci'      'utf8'        'Yes'       'No'            0             'Not applicable'
    6        'utf8_tr_cs'      'utf8'        'Yes'       'No'            0             'Not applicable'
    7        'utf8_ko_cs'      'utf8'        'Yes'       'No'            0             'Not applicable'
    8        'euckr_bin'       'euckr'       'Yes'       'No'            0             'Not applicable'
    9        'binary'          'binary'      'Yes'       'No'            0             'Not applicable'

Built-in collations are available without requiring additional user locale libraries.
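As a quick sketch (the table and column names are hypothetical), any built-in collation from the list above can be used directly at table creation, with no locale library configured:

.. code-block:: sql

    -- Sketch: utf8_en_ci is a built-in collation, so this works even when
    -- cubrid_locales.txt is empty and no locale library has been compiled.
    CREATE TABLE t_builtin (name VARCHAR(20) COLLATE utf8_en_ci);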
Each **collation** has an associated **charset**. For this reason, it is not allowed to pair a **character set** with an incompatible **collation**.

When the **COLLATE** modifier is specified without the **CHARSET** modifier, the default charset of the collation is set. When the **CHARSET** modifier is specified without the **COLLATE** modifier, the default collation of the charset is set. The default collations of the character sets are the bin collations:

* ISO-8859-1: iso88591_bin
* UTF-8: utf8_bin
* EUC-KR: euckr_bin
* Binary: binary

Binary is the name of both the collation and its associated charset.

For more information on how to determine the collation among expression parameters (operands) with different collations (and charsets), see :ref:`determine-collation-columns`.

.. _charset-collate-modifier:

CHARSET and COLLATE modifier
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

CUBRID supports two modifiers which affect collation and character set without following the default database collation and character set.

* **CHARACTER_SET** (alias **CHARSET**) changes the column's character set
* **COLLATE** changes the collation

::

    <column_type> ::= <data_type> [<charset_modifier_clause>] [<collation_modifier_clause>]

    <charset_modifier_clause> ::= {CHARACTER_SET | CHARSET} {<char_string_literal> | <identifier>}

    <collation_modifier_clause> ::= {COLLATE} {<char_string_literal> | <identifier>}

The following example shows how to set the charset of a **VARCHAR** type column to UTF-8.

.. code-block:: sql

    CREATE TABLE t1 (s1 VARCHAR (100) CHARSET utf8);

The following example shows how to change the name of column s1 to c1 and its type to CHAR(10) with the collation utf8_en_cs (the charset is the default charset of the collation, UTF-8).

.. code-block:: sql

    ALTER TABLE t1 CHANGE s1 c1 CHAR(10) COLLATE utf8_en_cs;

In the following query, the value of the c1 column is cast to the VARCHAR(5) type with collation iso88591_en_ci; this collation is used both for the selected value and for the sorting.

.. code-block:: sql

    SELECT CAST (c1 as VARCHAR(5) COLLATE 'iso88591_en_ci') FROM t1 ORDER BY 1;

The following query (same sorting) is similar to the above, but the output column keeps its original value.

.. code-block:: sql

    SELECT c1 FROM t1 ORDER BY CAST (c1 as VARCHAR(5) COLLATE iso88591_en_ci);

.. _determine-collation-columns:

How to Determine Collation among Columns with Different Collation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: sql

    CREATE TABLE t (
        s1 STRING COLLATE utf8_en_cs,
        s2 STRING COLLATE utf8_tr_cs
    );

    -- insert values into both columns

    SELECT s1, s2 FROM t WHERE s1 > s2;

::

    ERROR: '>' requires arguments with compatible collations.

In the above example, column *s1* and column *s2* have different collations. Comparing *s1* with *s2* means comparing the strings to determine which column value is "larger" among the records of table t. In this case, an error occurs because a comparison between the collation utf8_en_cs and the collation utf8_tr_cs cannot be done.

The rules that determine the types of the arguments of an expression are also applied to determine the collations.

#. A common collation and character set are determined by considering all arguments of an expression.
#. If an argument has a collation (and character set) different from the common collation (and character set) determined in step 1, it is converted to the common collation (and character set).
#. To change the collation, the :func:`CAST` operator can be used.

Collation coercibility is used to determine the resulting collation of a comparison expression. It expresses how easily the collation can be converted to the collation of the opposite argument. High collation coercibility of an operand means that its collation can be easily converted to the collation of the opposite argument.
That is, an argument with high collation coercibility can be changed to the collation of an argument with lower collation coercibility.

When an expression has various arguments with different collations, a common collation is computed based on each argument's collation and coercibility. The rules for collation inference are:

#. Arguments with higher coercibility are coerced (or cast) to the collation of arguments with lower coercibility.
#. When arguments have different collations but the same coercibility, the expression's collation cannot be resolved and an error is returned. However, when comparing two operands whose collation coercibility level is 11 (session variable, host variable) and whose charset is the same, if one of them has a bin collation (utf8_bin, iso88591_bin, euckr_bin), its collation is changed to the non-bin collation of the other operand. See :ref:`Converting Collation of Session Variable and/or Host Variable `.

The table below shows the collation coercibility of the arguments of an expression.

.. _collation-coercibility:

+------------------------+------------------------------------------------------------------------------------------------------+
| Collation Coercibility | Arguments of the Expression (Operands)                                                               |
+========================+======================================================================================================+
| -1                     | An expression which has only host variables as arguments; this coercibility cannot be                |
|                        | determined before the execution step.                                                                |
+------------------------+------------------------------------------------------------------------------------------------------+
| 0                      | Operand having a **COLLATE** modifier                                                                |
+------------------------+------------------------------------------------------------------------------------------------------+
| 1                      | **Columns** with non-binary and non-bin collation                                                    |
+------------------------+------------------------------------------------------------------------------------------------------+
| 2                      | **Columns** with binary collation and binary charset                                                 |
+------------------------+------------------------------------------------------------------------------------------------------+
| 3                      | **Columns** with bin collation (iso88591_bin, utf8_bin, euckr_bin)                                   |
+------------------------+------------------------------------------------------------------------------------------------------+
| 4                      | **SELECT values**, **Expression** with non-binary and non-bin collation                              |
+------------------------+------------------------------------------------------------------------------------------------------+
| 5                      | **SELECT values**, **Expression** with binary collation and binary charset                           |
+------------------------+------------------------------------------------------------------------------------------------------+
| 6                      | **SELECT values**, **Expression** with bin collation (iso88591_bin, utf8_bin, euckr_bin)             |
+------------------------+------------------------------------------------------------------------------------------------------+
| 7                      | **Special functions** (:func:`SYSTEM_USER`, :func:`DATABASE`, :func:`SCHEMA`, :func:`VERSION`)       |
+------------------------+------------------------------------------------------------------------------------------------------+
| 8                      | **Constants (string literals)** with non-binary and non-bin collation                                |
+------------------------+------------------------------------------------------------------------------------------------------+
| 9                      | **Constants (string literals)** with binary collation and binary charset                             |
+------------------------+------------------------------------------------------------------------------------------------------+
| 10                     | **Constants (string literals)** with bin collation (iso88591_bin, utf8_bin, euckr_bin)               |
+------------------------+------------------------------------------------------------------------------------------------------+
| 11                     | Host variables, session variables                                                                    |
+------------------------+------------------------------------------------------------------------------------------------------+

.. note:: In the 9.x versions, the binary collation was not available. The iso88591_bin collation played the role of the current binary collation. Since version 10.0, the coercibility of columns with iso88591_bin was demoted from 2 to 3, that of expressions with iso88591_bin from 5 to 6, and that of constants with iso88591_bin from 9 to 10.

Regarding an expression which has only host variables as arguments (e.g. UPPER(?) below), the coercibility can be determined only at the execution step. That is, the coercibility of such an expression cannot be determined at the parsing step; therefore, the COERCIBILITY function returns -1.

.. code-block:: sql

    SET NAMES utf8
    PREPARE st FROM 'SELECT COLLATION(UPPER(?)) col1, COERCIBILITY(UPPER(?)) col2';
    EXECUTE st USING 'a', 'a';

::

      col1                  col2
    ===================================
      'utf8_bin'              -1

For expressions having all arguments with coercibility 11 and with different collations, the common collation is resolved at run-time (this is an exception from the coercibility value-based rule of inference, which would otherwise require raising an error).

.. code-block:: sql

    PREPARE st1 FROM 'SELECT INSERT(?,2,2,?)';
    EXECUTE st1 USING _utf8'abcd', _binary'ef';

::

      insert( ?:0 , 2, 2, ?:1 )
    ======================
      'aefd'

The following shows converting two parameters with different collations to one collation.
*   **Converting into the Wanted Collation**

    The **SELECT** statement, which fails to execute in the above example, executes successfully when a collation is specified on one column with the **CAST** operator, as in the following query; then the two operands have the same collation.

    .. code-block:: sql

        SELECT s1, s2 FROM t WHERE s1 > CAST (s2 AS STRING COLLATE utf8_en_cs);

    Also, by applying **CAST** to s2 with a bin collation, the collation coercibility of the CAST (6) is higher than the coercibility of s1 (1).

    .. code-block:: sql

        SELECT s1, s2 FROM t WHERE s1 > CAST (s2 AS STRING COLLATE utf8_bin);

    In the following query, the second operand "CAST (s2 AS STRING COLLATE utf8_tr_cs)" is a sub-expression. The sub-expression has higher coercibility than the column (s1), so "CAST (s2 AS STRING COLLATE utf8_tr_cs)" is converted to the collation of s1.

    .. code-block:: sql

        SELECT s1, s2 FROM t WHERE s1 > CAST (s2 AS STRING COLLATE utf8_tr_cs);

    Any expression has higher coercibility than any column, so "CONCAT (s2,'')" is converted to the collation of s1 in the following query and the query executes successfully.

    .. code-block:: sql

        SELECT s1, s2 FROM t WHERE s1 > CONCAT (s2,'');

*   **Converting Collation of Constant and Column**

    In the following case, the comparison is made by using the collation of s1.

    .. code-block:: sql

        SELECT s1, s2 FROM t WHERE s1 > 'abc';

*   **When a Column is Created with Bin Collation**

    .. code-block:: sql

        CREATE TABLE t (
            s1 STRING COLLATE utf8_en_cs,
            s2 STRING COLLATE utf8_bin
        );
        SELECT s1, s2 FROM t WHERE s1 > s2;

    In this case, the coercibility of column s2 is 6 (bin collation) and s2 is "fully convertible" to the collation of s1, so utf8_en_cs is used.

    .. code-block:: sql

        CREATE TABLE t (
            s1 STRING COLLATE utf8_en_cs,
            s2 STRING COLLATE iso88591_bin
        );
        SELECT s1, s2 FROM t WHERE s1 > s2;

    In this case, utf8_en_cs is used as the collation, too. However, some overhead occurs to convert the charset to UTF-8, since s2 uses the ISO charset.
    In the following query, the charset is not converted (the UTF-8 byte data in s2 is simply reinterpreted in the ISO-8859-1 charset), but the character comparison is made by using the iso88591_en_cs collation.

    .. code-block:: sql

        CREATE TABLE t (
            s1 STRING COLLATE iso88591_en_cs,
            s2 STRING COLLATE utf8_bin
        );
        SELECT s1, s2 FROM t WHERE s1 > s2;

*   **Converting Collation of Sub-Expression and Column**

    .. code-block:: sql

        CREATE TABLE t (
            s1 STRING COLLATE utf8_en_cs,
            s2 STRING COLLATE utf8_tr_cs
        );
        SELECT s1, s2 FROM t WHERE s1 > s2 + 'abc';

    In this case, the second operand is an expression, so the collation of s1 is used.

    In the following example, an error occurs because the '+' operation is attempted on s2 and s3, whose collations differ.

    .. code-block:: sql

        CREATE TABLE t (
            s1 STRING COLLATE utf8_en_cs,
            s2 STRING COLLATE utf8_tr_cs,
            s3 STRING COLLATE utf8_en_ci
        );
        SELECT s1, s2 FROM t WHERE s1 > s2 + s3;

    ::

        ERROR: '+' requires arguments with compatible collations.

    In the following example, the collation of both s2 and s3 is utf8_tr_cs, so the collation of the '+' expression is utf8_tr_cs, too. Expressions have higher coercibility than columns; therefore, the comparison is made by using the utf8_en_cs collation.

    .. code-block:: sql

        CREATE TABLE t (
            s1 STRING COLLATE utf8_en_cs,
            s2 STRING COLLATE utf8_tr_cs,
            s3 STRING COLLATE utf8_tr_cs
        );
        SELECT s1, s2 FROM t WHERE s1 > s2 + s3;

*   **Converting Collation of Number, Date**

    A number or date constant which is convertible into a string during an operation is always coercible into the other string's collation.

.. _comparison-between-session-and-or-host-variables:

*   **Converting Collation of Session Variable and/or Host Variable**

    When comparing two operands whose collation coercibility level is 11 (session variable, host variable) and whose charset is the same, the bin collation of one of them is changed to the other's non-bin collation.

    .. code-block:: sql

        SET NAMES utf8;
        SET @v1='a';

        PREPARE stmt FROM 'SELECT COERCIBILITY(?), COERCIBILITY(@v1), COLLATION(?), COLLATION(@v1), ? = @v1';

        SET NAMES utf8 COLLATE utf8_en_ci;
        EXECUTE stmt USING 'A', 'A', 'A';

    When comparing @v1 and 'A', the collation of @v1 is changed to utf8_en_ci, a non-bin collation; therefore, the value of @v1 and 'A' compare as equal and the result of "? = @v1" is 1, as below.

    ::

        coercibility( ?:0 )   coercibility(@v1)   collation( ?:1 )   collation(@v1)   ?:2 =@v1
        ======================================================================================
        11                    11                  'utf8_en_ci'       'utf8_bin'       1

    .. code-block:: sql

        SET NAMES utf8 COLLATE utf8_en_cs;
        EXECUTE stmt USING 'A', 'A', 'A';

    When comparing @v1 and 'A', the collation of @v1 is changed to utf8_en_cs, a non-bin collation; therefore, the value of @v1 and 'A' compare as different and the result of "? = @v1" is 0, as below.

    ::

        coercibility( ?:0 )   coercibility(@v1)   collation( ?:1 )   collation(@v1)   ?:2 =@v1
        ======================================================================================
        11                    11                  'utf8_en_cs'       'utf8_bin'       0

    However, if @v1 and 'A' have two different non-bin collations, as below, an error occurs.

    .. code-block:: sql

        DEALLOCATE PREPARE stmt;

        SET NAMES utf8 COLLATE utf8_en_ci;
        SET @v1='a';
        PREPARE stmt FROM 'SELECT COERCIBILITY(?), COERCIBILITY(@v1), COLLATION(?), COLLATION(@v1), ? = @v1';

        SET NAMES utf8 COLLATE utf8_en_cs;
        EXECUTE stmt USING 'A', 'A', 'A';

    ::

        ERROR: Context requires compatible collations.

Charset and Collation of an ENUM type column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The charset and collation of an **ENUM** type column follow the locale specified when creating the database.

For example, create the below table after creating a database with en_US.iso88591.

.. code-block:: sql

    CREATE TABLE tbl (e ENUM (_utf8'a', _utf8'b'));

Column 'e' of the above table has the ISO-8859-1 charset and the iso88591_bin collation, even though the charset of the elements is defined as UTF-8.
If the user wants to apply another charset or collation, it must be specified on the column of the table. Below is an example of specifying the collation on a column of the table.

.. code-block:: sql

    CREATE TABLE t (e ENUM (_utf8'a', _utf8'b') COLLATE utf8_bin);
    CREATE TABLE t (e ENUM (_utf8'a', _utf8'b')) COLLATE utf8_bin;

.. _collation-charset-table:

Charset and Collation of Tables
-------------------------------

The charset and the collation can be specified after the table creation syntax.

::

    CREATE TABLE table_name () [CHARSET charset_name] [COLLATE collation_name]

If the charset and the collation of a column are omitted, the charset and the collation of the table are used. If the charset and the collation of a table are omitted, the system charset and collation are used.

The following shows how to specify the collation on a table.

.. code-block:: sql

    CREATE TABLE tbl(
        i1 INTEGER,
        s STRING
    ) CHARSET utf8 COLLATE utf8_en_cs;

If the charset of a column is specified while only the table's collation is specified, the column does not take the table's collation; instead, it gets the default collation of its own charset (the bin collation).

.. code-block:: sql

    CREATE TABLE tbl (col STRING CHARSET utf8) COLLATE utf8_en_ci;

In the above query, the collation of the column col becomes utf8_bin, the default collation for this column's charset.

::

    csql> ;sc tbl

    tbl                  COLLATE utf8_en_ci
             col                  CHARACTER VARYING(1073741823) COLLATE utf8_bin

.. _collation-charset-string:

Charset and Collation of String Literals
----------------------------------------

The charset and the collation of a string literal are determined based on the following priority.

#.  The :ref:`charset-introducer` introducer or the :ref:`COLLATE modifier <charset-introducer>` of the string literal
#.  The charset and the collation defined by the :ref:`set-names-stmt`
#.  The system charset and collation (the default collation of the locale specified when creating the database)

.. _set-names-stmt:

SET NAMES Statement
^^^^^^^^^^^^^^^^^^^^

The **SET NAMES** statement changes the default client charset and collation; all statements executed afterwards in that client session use the specified charset and collation. The syntax is as follows.

::

    SET NAMES [ charset_name ] [ COLLATE collation_name ]

*   *charset_name*: Valid charset names are iso88591, utf8, euckr and binary.
*   *collation_name*: The collation setting can be omitted; any available collation can be set. The collation must be compatible with the charset; otherwise, an error occurs. To find the available collation names, look up the **db_collation** catalog VIEW (see :ref:`collation-charset-column`).

Specifying a collation with the **SET NAMES** statement is the same as setting the system parameter **intl_collation**; the following two statements behave identically.

.. code-block:: sql

    SET NAMES utf8;
    SET SYSTEM PARAMETERS 'intl_collation=utf8_bin';

The following example shows how to create a string literal with the default charset and collation.

.. code-block:: sql

    SELECT 'a';

The following example shows how to create a string literal with the utf8 charset and the utf8_bin collation (the default collation is the bin collation of the charset).

.. code-block:: sql

    SET NAMES utf8;
    SELECT 'a';

.. _charset-introducer:

Charset Introducer
^^^^^^^^^^^^^^^^^^^

In front of a constant string, a charset introducer and the **COLLATE** modifier can be positioned. The charset introducer is a charset name starting with an underscore (_), coming before the constant string. The syntax to specify the **CHARSET** introducer and the **COLLATE** modifier for a string is as follows.

::

    [charset_introducer]'constant-string' [ COLLATE collation_name ]

*   *charset_introducer*: a charset name starting with an underscore (_); can be omitted. One of _utf8, _iso88591, _euckr and _binary can be entered.
*   *constant-string*: a constant string value.
*   *collation_name*: the name of a collation that can be used in the system; can be omitted.

The default charset and collation of a constant string are determined based on the current database connection (the **SET NAMES** statement executed last, or the default values).

*   When the string charset introducer is specified and the **COLLATE** modifier is omitted, the collation is:

    *   if the charset introducer is the same as the client charset (from a previous SET NAMES), the client collation is applied;
    *   if the charset introducer does not match the client charset, the bin collation (one of euckr_bin, iso88591_bin and utf8_bin) corresponding to the charset introducer is applied.

*   When the charset introducer is omitted and the **COLLATE** modifier is specified, the charset is determined based on the collation.

The following example shows how to specify the charset introducer and the **COLLATE** modifier.

.. code-block:: sql

    SELECT 'cubrid';
    SELECT _utf8'cubrid';
    SELECT _utf8'cubrid' COLLATE utf8_en_cs;

The following example shows how to create a string literal with the utf8 charset and the utf8_en_cs collation. The **COLLATE** modifier of the **SELECT** statement overrides the collation specified by the **SET NAMES** syntax.

.. code-block:: sql

    SET NAMES utf8 COLLATE utf8_en_ci;
    SELECT 'a' COLLATE utf8_en_cs;

Charset and Collation of Expressions
------------------------------------

The charset and collation of an expression's result are inferred from the charset and collation of its arguments. Collation inference in CUBRID is based on coercibility. For more information, see :ref:`determine-collation-columns`.

All string matching functions (LIKE, REPLACE, INSTR, POSITION, LOCATE, SUBSTRING_INDEX, FIND_IN_SET, etc.) and comparison operators (<, >, =, etc.) take collation into account.

Charset and Collation of System Data
------------------------------------

The system charset is taken from the locale specified when creating the database.
The system collation is always the bin collation (<*charset*>_bin) of the system charset. CUBRID supports three charsets (iso88591, euckr, utf8) and, accordingly, three system collations (iso88591_bin, euckr_bin, utf8_bin).

Impact of Charset Specified When Creating DB
--------------------------------------------

The locale specified when creating the database affects the following.

*   Characters supported in identifiers and casing rules (called the "alphabet")
*   Default locale for date - string conversion functions
*   Default locale for number - string conversion functions
*   Console conversion in CSQL

.. _casing-and-identifiers:

Casing and identifiers
^^^^^^^^^^^^^^^^^^^^^^^

In CUBRID, identifiers are case insensitive. Tables, columns, session variables, triggers and stored procedures are stored in lower case. Authentication identifiers (user and group names) are stored in upper case.

The ISO-8859-1 charset contains only 255 characters, so the casing primitives are able to use built-in data. The EUC-KR charset, from which only the ASCII-compatible characters are considered for casing (and are handled in the code), is also built-in.

The UTF-8 charset is a special case: there are built-in variants of UTF-8 locales (like en_US.utf8, tr_TR.utf8 and ko_KR.utf8) and LDML locales. The built-in variants implement only the characters specific to the locale (ASCII characters for en_US.utf8 and ko_KR.utf8, ASCII + Turkish glyphs [#f1]_ for tr_TR.utf8). This means that while all UTF-8 characters encoded on a maximum of 4 bytes are still supported and accepted as identifiers, most of them are not handled as letters, and are treated as ordinary Unicode characters by the casing primitives. For instance, the character "È" (Unicode codepoint 00C8) is allowed, but an identifier containing it will not be normalized to "è" (lower case).

.. code-block:: sql

    CREATE TABLE ÈABC;

Therefore, after running the above query, the table name is stored as "Èabc" in the system table **_db_class**.
Using an LDML locale (built-in variants can also be overridden with an LDML variant) extends the supported Unicode characters up to codepoint FFFF. For instance, if the locale is set to es_ES.utf8 when creating the database and the corresponding locale library is loaded, the previous statement creates a table with the name "èabc".

As previously mentioned, a set of casing rules and supported characters (letters) forms an "alphabet" in CUBRID (this is actually a tag in LDML). Some locales, like tr_TR and de_DE, have specific casing rules:

-   in Turkish: lower('I') = 'ı' (dot-less lower i); upper('i') = 'İ' (capital I with dot).
-   in German: upper('ß') = 'SS' (two capital S letters).

Because of this, such locales have two sets of alphabets: one which applies to system data (identifiers) and one which applies to user data. The alphabet applying to user data includes the special rules, while the system (identifiers) alphabet does not, thus making the system alphabets compatible between locales. This is required to avoid issues with identifiers (like in Turkish, where casing of the group name "public" would result in errors -> "PUBLİC" != "PUBLIC"). It also provides compatibility between databases with different locales (it should be possible to export - import schema and data).

String literal input and output
-------------------------------

String literal data may be entered to CUBRID in various ways:

*   API interface (CCI)
*   language dependent interface - JDBC, Perl driver, etc.
*   CSQL - command line from console or input file

When receiving character data through drivers, CUBRID cannot be aware of the charset of those strings. All text data contained between quotes (string literals) is handled by CUBRID as raw bytes; the charset meta-information must be provided by the client. CUBRID provides a way for the client to instruct it about which encoding is used for its character data. This is done with the SET NAMES statement or with a charset introducer.
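For instance, a client can tag each literal with an introducer instead of relying on the session charset. This is a minimal sketch; the table name t and the inserted value are illustrative, and the column charset clause follows the syntax shown later in this document.

.. code-block:: sql

    -- assumed table, for illustration only
    CREATE TABLE t (s STRING CHARSET utf8);

    -- the introducer declares the encoding of the quoted bytes,
    -- independently of the client charset set by SET NAMES
    INSERT INTO t VALUES (_utf8'München');
    SELECT s FROM t WHERE s = _utf8'München';

Without the introducer (or a prior SET NAMES utf8), the bytes of 'München' would be interpreted in the current client charset, which may store unintended data.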
Text Conversion for CSQL
^^^^^^^^^^^^^^^^^^^^^^^^^

Text console conversion works in the CSQL console interface. Most locales have an associated charset (or codepage in Windows) which makes it easy to write non-ASCII characters from the console. For example, the LDML for the tr_TR.utf8 locale associates the ISO-8859-9 charset (codepage 28599 on Windows). If the user sets the console accordingly (chcp 28599 in Windows, or export LANG=tr_TR.iso88599 in Linux), CUBRID assumes all input is encoded in the ISO-8859-9 charset and converts all data to UTF-8. Also, when printing results, CUBRID performs the reverse conversion (from UTF-8 to ISO-8859-9). In Linux, to avoid this transformation, using UTF-8 directly (e.g. export LANG=tr_TR.utf8) is recommended.

The setting is optional, in the sense that the XML tag is not required in the LDML locale file. For example, the locale km_KH.utf8 does not have an associated codepage.

**Example for configuring French language and inputting French characters**

Enable fr_FR in cubrid_locales.txt, compile the locales (see :ref:`locale-setting`) and set fr_FR.utf8 when you create the database.

In Linux:

*   Set the console to receive UTF-8: set LANG=fr_FR.utf8 or en_US.utf8 (any locale with UTF-8). This setting allows the input of any UTF-8 character (not only French-specific ones);
*   or, set the console to receive ISO-8859-15: set LANG=fr_FR.iso885915; in the LDML tag, set linux_charset="iso885915". This will receive only ISO-8859-15 characters, which will be converted by CSQL to the UTF-8 encoding.

In Windows:

*   Set the Windows codepage to 28605 (chcp 28605 in a command prompt); in the LDML tag, set windows_codepage="28605". Codepage 28605 corresponds to the ISO-8859-15 charset.

**Example for configuring Romanian and inputting Romanian characters**

Enable ro_RO in cubrid_locales.txt, compile the locales (see :ref:`locale-setting`) and set ro_RO.utf8 when you create the database.

In Linux:

*   Set the console to receive UTF-8: set LANG=ro_RO.utf8 or en_US.utf8 (any locale with UTF-8). This setting allows the input of any UTF-8 character (not only Romanian-specific ones);
*   or, set the console to receive ISO-8859-2: set LANG=ro_RO.iso88592; in the LDML tag, set linux_charset="iso88592". This will receive only ISO-8859-2 characters, which will be converted by CSQL to the UTF-8 encoding.

In Windows:

*   Set the Windows codepage to 1250 (chcp 1250 in a command prompt); in the LDML tag, set windows_codepage="1250". Codepage 1250 corresponds to the ISO-8859-2 charset. Codepage 1250 contains characters specific to some Central and Eastern European languages, including Romanian. Please note that characters outside codepage 1250 will not be properly displayed. To use special characters of the Romanian alphabet (e.g. "S" and "T" with cedilla below), the Romanian legacy keyboard setting of the "Control Panel" on Windows is required.
*   ISO-8859-2 contains some characters which codepage 1250 does not have, so you cannot input or output all characters of ISO-8859-2 with CSQL.

At input, the console conversion process takes all input (including statements) and performs the conversion (only if it is required, i.e. the input contains characters that need conversion). At output (printing results, error messages), CSQL is more selective and does not convert all text. For instance, the printing of numeric values is not filtered through console conversion (since number text contains only ASCII characters).

.. _unicode-normalization:

Unicode Normalization
^^^^^^^^^^^^^^^^^^^^^^

Glyphs [#f1]_ can be written in various forms using Unicode characters/codepoints. The best known are the decomposed and composed forms. For instance, the glyph 'Ä' is written in composed form with a single codepoint, 00C4; in UTF-8 this takes two bytes: C3 84. In (fully) decomposed form, it is written with two codepoints, 0041 ('A') and 0308 (COMBINING DIAERESIS), and in UTF-8 it is encoded using 3 bytes: 41 CC 88. Most text editors are able to handle both forms, so both encodings will appear as the same glyph: 'Ä'.
Internally, CUBRID "knows" to work only with "fully composed" text. For clients working with "fully decomposed" text, CUBRID can be configured to convert such text to "fully composed" and to serve it back as "fully decomposed". Normalization is not a locale-specific feature; it does not depend on the locale.

The **unicode_input_normalization** system parameter controls the composition at the system level. For more details, see :ref:`unicode_input_normalization`.

The main use case is with both parameters enabled (**unicode_input_normalization**, **unicode_output_normalization**): this ensures that a string from a client knowing only decomposed Unicode is still properly handled by CUBRID. A second use case is **unicode_input_normalization** = yes and **unicode_output_normalization** = no, for a client able to handle both types of Unicode writing.

.. _collation-cont-exp:

Contraction and Expansion of Collation
--------------------------------------

CUBRID supports contraction and expansion for collation. Contraction and expansion are available for UTF-8 charset collations. You can see the contraction and expansion of a collation in the collation settings in the LDML file. Using contraction and expansion affects the size of the locale data (shared library) and server performance.

.. _contraction:

Contraction
^^^^^^^^^^^

A contraction is a sequence consisting of two or more codepoints, considered a single letter in sorting. For example, in the traditional Spanish sorting order, "ch" is considered a single letter. All words that begin with "ch" sort after all other words beginning with "c", but before words starting with "d". Other examples of contractions are "ch" in Czech, which sorts after "h", and "lj" and "nj" in Croatian and Latin Serbian, which sort after "l" and "n" respectively. See http://userguide.icu-project.org/collation/concepts for additional information. There are also some contractions defined in the `DUCET <http://www.unicode.org/Public/UCA/latest/allkeys.txt>`_.
Contractions are supported in both collation variants: with expansions and without expansions. Contraction support requires changes in a significant number of key areas; it also involves storing a contraction table inside the collation data. The handling of contractions is controlled by the LDML parameters **DUCETContractions="ignore/use"** and **TailoringContractions="ignore/use"** in the tag of the collation definition. The first one controls whether the contractions in the DUCET file are loaded into the collation; the second one controls whether the contractions defined by rules in LDML are ignored or not (an easier way than adding/deleting all rules introducing contractions).

.. _expansion:

Expansion
^^^^^^^^^

Expansions refer to codepoints which have more than one collation element. Enabling expansions in CUBRID radically changes the collation's behavior, as described below. The **CUBRIDExpansions="use"** parameter controls this behavior.

**Collation without Expansion**

In a collation without expansions, each codepoint is treated independently. Based on the strength of the collation, the alphabet may or may not be fully sorted. The collation algorithm sorts the codepoints by comparing the weights in a set of levels, and then generates a single value representing the weight of the codepoint. String comparison is then rather straightforward: comparing two strings in an expansion-free collation means comparing codepoint by codepoint, using the computed weight values.

**Collation with Expansion**

In a collation with expansions, some composed characters (codepoints) are to be interpreted as an ordered list of other characters (codepoints). For example, 'æ' might require to be interpreted the same way as 'ae', or 'ä' as 'ae' or 'aa'. In DUCET, the collation element list of 'æ' is the concatenation of the collation element lists of both 'a' and 'e', in this order.
Deciding a particular order for the codepoints is no longer possible, and neither is computing new weight values for each character/codepoint.

In a collation with expansions, string comparison is done by concatenating the collation elements for the codepoints/contractions in two lists (for the two strings) and then comparing the weights in those lists for each level.

**Example 1**

The purpose of these examples is to show that, under different collation settings (with or without expansion support), string comparison might yield different results.

Below are the lines from DUCET which correspond to the subset of codepoints used for the comparisons in the examples below.

::

    0041  ; [.15A3.0020.0008.0041]                       # LATIN CAPITAL LETTER A
    0052  ; [.1770.0020.0008.0052]                       # LATIN CAPITAL LETTER R
    0061  ; [.15A3.0020.0002.0061]                       # LATIN SMALL LETTER A
    0072  ; [.1770.0020.0002.0072]                       # LATIN SMALL LETTER R
    00C4  ; [.15A3.0020.0008.0041][.0000.0047.0002.0308] # LATIN CAPITAL LETTER A WITH DIAERESIS;
    00E4  ; [.15A3.0020.0002.0061][.0000.0047.0002.0308] # LATIN SMALL LETTER A WITH DIAERESIS;

Three types of settings for the collation will be illustrated:

*   Primary strength, no casing (level 1 only)
*   Secondary strength, no casing (levels 1 and 2)
*   Tertiary strength, uppercase first (levels 1, 2 and 3)

From now on, sorting of the strings "Ar" and "Är" will be attempted.

**Collation without Expansions Support**

When expansions are disabled, each codepoint is reassigned a new single-valued weight. Based on the algorithm described above and the weights for A, Ä, R and their lowercase counterparts, the order of the codepoints for these characters, for each of the collation settings above, will be as follows.

*   Primary strength: A = Ä < R = r
*   Secondary strength: A < Ä < R = r
*   Tertiary strength: A < Ä < R < r

The sort order for the chosen strings is easy to decide, since there are computed weights for each codepoint.
*   Primary strength: "Ar" = "Är"
*   Secondary strength: "Ar" < "Är"
*   Tertiary strength: "Ar" < "Är"

**Collation with Expansions**

The sorting order changes for a collation with expansions. Based on DUCET, the concatenated lists of collation elements for the strings in our example are provided below:

::

    Ar    [.15A3.0020.0008.0041][.1770.0020.0002.0072]
    Är    [.15A3.0020.0008.0041][.0000.0047.0002.0308][.1770.0020.0002.0072]

It is rather obvious that on the first pass, for the level 1 weights, 0x15A3 is compared with 0x15A3. In the second iteration, the 0x0000 weight is skipped, and 0x1770 is compared with 0x1770. Since the strings are identical so far, the comparison continues on the level 2 weights, first comparing 0x0020 with 0x0020, then 0x0020 with 0x0047, yielding "Ar" < "Är".

The example above was meant to show how string comparison is done when using a collation with expansion support. Let us change the collation settings, and show how one may obtain a different order for the same strings when using a collation for German, where "Ä" is supposed to be interpreted as the character group "AE". The codepoints and collation elements of the characters involved in this example are as follows.

::

    0041  ; [.15A3.0020.0008.0041]                       # LATIN CAPITAL LETTER A
    0045  ; [.15FF.0020.0008.0045]                       # LATIN CAPITAL LETTER E
    0072  ; [.1770.0020.0002.0072]                       # LATIN SMALL LETTER R
    00C4  ; [.15A3.0020.0008.0041][.15FF.0020.0008.0045] # LATIN CAPITAL LETTER A WITH DIAERESIS; EXPANSION

When comparing the strings "Är" and "Ar", the algorithm for string comparison using a collation with expansion support involves comparing the simulated concatenation of collation element lists for the characters in the two strings.
::

    Ar    [.15A3.0020.0008.0041][.1770.0020.0002.0072]
    Är    [.15A3.0020.0008.0041][.15FF.0020.0008.0045][.1770.0020.0002.0072]

On the first pass, when comparing the level 1 weights, 0x15A3 is compared with 0x15A3, then 0x1770 with 0x15FF, where a difference is found. This comparison yields "Ar" > "Är", a result completely different from the one in the previous example.

**Example 2**

In Canadian French sorting with a collation with expansions, accents are compared from the end of the string towards the beginning.

*   Normal accent ordering: cote < coté < côte < côté
*   Backward accent ordering: cote < côte < coté < côté

.. _operations-charset-collation:

Operations Requiring Collation and Charset
------------------------------------------

Charset
^^^^^^^

Charset information is required for functions which use character primitives. There are exceptions: :func:`OCTET_LENGTH` and :func:`BIT_LENGTH` do not require the charset internally to return the length in bytes and bits. However, for the same glyph (character symbol) stored in different charsets, they return different values:

.. code-block:: sql

    CREATE TABLE t (s_iso STRING CHARSET iso88591, s_utf8 STRING CHARSET utf8);
    SET NAMES iso88591;
    INSERT INTO t VALUES ('È','È');

    -- the first returns 1, while the second returns 2
    SELECT OCTET_LENGTH(s_iso), OCTET_LENGTH(s_utf8) FROM t;

The previous example should be run from a console (or a client) with the ISO-8859-1 charset.

Collation
^^^^^^^^^

Collation is required in functions and operators which involve a comparison between two strings or matching two strings. These include functions like :func:`STRCMP` and :func:`POSITION`, the LIKE condition, and comparison operators (<, =, >=, etc.). Clauses like ORDER BY and GROUP BY, and aggregates (:func:`MIN`, :func:`MAX`, :func:`GROUP_CONCAT`), also use collation.

Collation is also considered in the :func:`UPPER` and :func:`LOWER` functions, in the following manner:

*   Each collation has a default (parent) locale.
* UPPER and LOWER functions are performed using the user alphabet of the default locale of the collation. For most collations, the default locale is obvious (is embedded in the name): * utf8_tr_cs ¡ú tr_TR.utf8 * iso88591_en_ci ¡ú en_US (ISO-8859-1 charset) The bin collations have the following default locales: * iso88591_bin ¡ú en_US (ISO-8859-1 charset) * utf8_bin (en_US.utf8 - built-in locale - and handles ASCII characters only) * euckr_bin (ko_KR.euckr - built-in locale - and handles ASCII characters only) There are some generic collations available in LDML. These collations have as default locale, the locale in which they are first found. The order of loading is the locales order from **$CUBRID/conf/cubrid_locales.txt**. Assuming the default order (alphabetical), the default locale for all generic LDML collations is de_DE (German). Charset conversion ^^^^^^^^^^^^^^^^^^ For the three charsets supported by CUBRID the conversion rules are: * General rules is that character transcoding occurs (representation of bytes is changed to the destination charset) - precision is kept, while byte size may change (for variable character data). When changing charset of a column with fixed precision (ALTER..CHANGE), the size in bytes always changes (size = precision x charset multiplier). * Exceptions are: utf8 and euckr to iso88591 - the precision is kept and data can be truncated. The following is an example that you run queries by changing the charset as utf8 in the database that the locale specified when creating DB is en_US(.iso88591). .. code-block:: sql SET NAMES utf8; CREATE TABLE t1(col1 CHAR(1)); INSERT INTO t1 VALUES ('?'); When you run above queries, the data of col1 is truncated because '?' is two bytes character and col1's size is one byte. The charset of database is iso88591, and the charset of input data is utf8; it converts utf8 to iso88591. .. 
.. _collation-setting-impacted-features:

Collation settings impacting CUBRID features
--------------------------------------------

LIKE Conditional Optimization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The **LIKE** conditional expression compares patterns between string data, and returns TRUE if a string whose pattern matches the search word is found.

As already shown above, when using a "collation without expansion support", each codepoint receives a single integer value, representing its weight in the comparison process. This weight value is computed based on collation settings (strength, casing, etc.). Because characters can always be regarded as single entities, matching a string against a pattern with the **LIKE** predicate is equivalent to checking whether the string falls in a certain range of strings. For example, in order to process a predicate such as ``s LIKE 'abc%'``, CUBRID first rewrites it as a range restriction for the string "s". "s LIKE 'abc%'" means that "s" must start with the string "abc". In terms of string comparison, this is equivalent, in expansion-free collations, to "s" being greater than or equal to "abc" but smaller than its successor (using the English alphabet, the successor of "abc" would be "abd").

::

    s LIKE 'abc%' → s ≥ 'abc' AND s < 'abd' (if using strictly the English alphabet)

This way, the actual interpretation of **LIKE** is replaced with simple comparisons. "Collations with expansion support" behave differently, however. Comparing strings under such a collation means comparing the concatenated lists of collation elements for each codepoint or expansion, level by level. For more information about comparing strings on a collation with expansions, see :ref:`expansion`. If the **LIKE** predicate rewrite method were kept the same as in a collation with no expansion support, as in the example above, the comparison result could be wrong.
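The expansion-free rewrite can be sketched in a few lines of Java. Here ``successor`` is a hypothetical helper standing in for CUBRID's internal computation, and plain codepoint order is assumed (no expansions):

.. code-block:: java

    public class LikeRangeRewrite {
        // Hypothetical helper: the smallest string greater than every string
        // starting with the given prefix, assuming plain codepoint order.
        static String successor(String prefix) {
            int last = prefix.length() - 1;
            return prefix.substring(0, last) + (char) (prefix.charAt(last) + 1);
        }

        // s LIKE 'abc%'  ->  s >= 'abc' AND s < 'abd'
        static boolean matchesPrefix(String s, String prefix) {
            return s.compareTo(prefix) >= 0 && s.compareTo(successor(prefix)) < 0;
        }

        public static void main(String[] args) {
            System.out.println(successor("abc"));               // abd
            System.out.println(matchesPrefix("abcdef", "abc")); // true
            System.out.println(matchesPrefix("abd", "abc"));    // false
        }
    }
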
To ensure the right query result, the **LIKE** predicate rewrite is performed differently, as in the example below: the **LIKE** predicate is kept as a filter, to exclude the wrong rows which could otherwise be returned under a collation with expansions.

::

    s LIKE 'abc%' → s ≥ 'abc' AND s < 'abd' AND s LIKE 'abc%' (if using strictly the English alphabet)

Index Covering
^^^^^^^^^^^^^^

Covering index scan is a query optimization in which, if all values needed by the query can be computed using only the values found in the index, no additional row lookup in the heap file is required. For more information, see :ref:`covering-index`.

In a case-insensitive collation, for the two string values 'abc' and 'ABC', only one value is stored in the index (either 'abc' or 'ABC', depending on which one was inserted first). As a result, an incorrect result may be returned whenever at least two different strings produce the same sort key in a given collation. For this reason, for all UTF-8 collations with a strength level lower than 4 (quaternary), the index covering query optimization is disabled.

This is controlled by strength="tertiary/quaternary" in the 'settings' tag of the collation definition in LDML. Consider carefully before setting this level to maximum strength, because the quaternary strength level requires not only more memory space and a bigger shared library file, but also more string-comparison time. For more information about collations, see :ref:`collation`.
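The sort-key collision behind this restriction can be illustrated with Java's ``java.text.Collator``, used here only as a stand-in for a CUBRID case-insensitive collation:

.. code-block:: java

    import java.text.CollationKey;
    import java.text.Collator;
    import java.util.Locale;

    public class SortKeyCollision {
        public static void main(String[] args) {
            Collator ci = Collator.getInstance(Locale.US);
            ci.setStrength(Collator.SECONDARY);  // case-insensitive, accent-sensitive

            CollationKey k1 = ci.getCollationKey("abc");
            CollationKey k2 = ci.getCollationKey("ABC");

            // Both strings produce the same sort key, so an index storing only
            // collated keys cannot reconstruct which original string was inserted.
            System.out.println(k1.compareTo(k2) == 0);  // true
            System.out.println("abc".equals("ABC"));    // false
        }
    }
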
Summary of CUBRID Features for Each Collation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+---------------------+---------------------------------------------+------------------------+
| Collation           | LIKE condition kept after rewrite to range  | Allows index covering  |
+=====================+=============================================+========================+
| iso88591_bin        | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| iso88591_en_cs      | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| iso88591_en_ci      | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_bin            | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| euckr_bin           | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_en_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_en_ci          | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_tr_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_ko_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_gen            | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_gen_ai_ci      | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_gen_ci         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_de_exp_ai_ci   | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_de_exp         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ro_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_es_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_fr_exp_ab      | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ja_exp         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ja_exp_cbm     | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_km_exp         | Yes                                         | No                     |
+---------------------+---------------------------------------------+------------------------+
| utf8_ko_cs_uca      | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_tr_cs_uca      | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| utf8_vi_cs          | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+
| binary              | No                                          | Yes                    |
+---------------------+---------------------------------------------+------------------------+

Viewing Collation Information
-----------------------------

To view the collation information, use the :func:`CHARSET`, :func:`COLLATION` and :func:`COERCIBILITY` functions.

The database collation information can be viewed in the db_collation system view or with the :ref:`show-collation-statement`.

Using i18n characters with JDBC
===============================

CUBRID JDBC stores string type values received from the server using String and CUBRIDBinaryString objects. String objects use UTF-16 internally to store each character; they should be used to store any string DB value except those having the binary charset.
CUBRIDBinaryString uses a byte array and should be used to store database string values of the binary charset. The data buffer of each string value received from the server is accompanied by the charset of the value: either the charset of the column or expression's result, or the system charset for any other string without a correspondent in the database.

.. note:: In previous versions, the connection charset was used to instantiate JDBC String objects from database values.

Create a table with one column having the UTF-8 charset and another of the binary charset:

.. code-block:: sql

    CREATE TABLE t1(col1 VARCHAR(10) CHARSET utf8, col2 VARCHAR(10) CHARSET binary);

Insert one row (the second column has random value bytes):

.. code-block:: java

    Connection conn = getConn(null);
    PreparedStatement st = conn.prepareStatement("insert into t1 values( ?, ? )");

    byte[] b = new byte[]{(byte)161, (byte)224};
    CUBRIDBinaryString cbs = new CUBRIDBinaryString(b);
    String utf8_str = new String("abc");

    st.setObject(1, utf8_str);
    st.setObject(2, cbs);
    st.executeUpdate();

Query the table and show the contents (for a binary string we display a hex dump, for other charsets the String value):
.. code-block:: java

    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("select col1, col2 from t1;");
    ResultSetMetaData rsmd = rs.getMetaData();
    int numberofColumn = rsmd.getColumnCount();

    while (rs.next()) {
        for (int j = 1; j <= numberofColumn; j++) {
            String columnTypeName = rsmd.getColumnTypeName(j);
            int columnType = rsmd.getColumnType(j);

            if (((CUBRIDResultSetMetaData) rsmd).getColumnCharset(j).equals("BINARY")) {
                // database string with binary charset
                Object res = rs.getObject(j);
                byte[] byte_array = ((CUBRIDBinaryString) res).getBytes();
                System.out.println(res.toString());
            } else {
                // database string with any other charset
                String res = rs.getString(j);
                System.out.print(res);
            }
        }
    }

Timezone Setting
================

A timezone can be set with system parameters: the **timezone** parameter, which applies to a session, and the **server_timezone** parameter, which applies to the database server. For details, see :ref:`timezone-parameters`.

The **timezone** parameter is a session parameter; its value is kept per session. ::

    SET SYSTEM PARAMETERS 'timezone=Asia/Seoul';

If this value is not set, it defaults to the value of **server_timezone**.

The **server_timezone** parameter is a parameter for the database server. ::

    SET SYSTEM PARAMETERS 'server_timezone=Asia/Seoul';

If this value is not set, it follows the operating system's setting.

To use timezone information, a timezone type should be used. For details, see :ref:`timezone-type`.

When a timezone is set by region name, a separate timezone library is required. To use a library with updated timezone information, rather than the information installed with CUBRID, the timezone library should be recompiled after the timezone information is changed. The following is an example of compiling the timezone library after updating the timezone information from IANA (http://www.iana.org/time-zones). For details, see the following description.
.. _timezone-library:

Compiling Timezone Library
--------------------------

To use a timezone specified by region name, the timezone library is required. It is provided by default when CUBRID is installed. However, to update the timezone region information to the latest version, the timezone library should be recompiled after downloading the latest code from IANA (http://www.iana.org/time-zones).

The following is the process to update the timezone library. First, stop the CUBRID service, then follow the steps below.

Windows
^^^^^^^

1. Download the latest data from http://www.iana.org/time-zones ; download the file linked under "Time Zone Data" in "Latest version".
2. Decompress the archive into the **%CUBRID%/timezones/tzdata** directory.
3. Run **%CUBRID%/bin/make_tz.bat**. **libcubrid_timezones.dll** is created in the **%CUBRID%/lib** directory. ::

    make_tz.bat

.. note:: To run the **make_tz** script in Windows, one of Visual C++ 2005, 2008 or 2010 should be installed.

Linux
^^^^^

1. Download the latest data from http://www.iana.org/time-zones ; download the file linked under "Time Zone Data" in "Latest version".
2. Decompress the archive into the **$CUBRID/timezones/tzdata** directory.
3. Run **make_tz.sh**. **libcubrid_timezones.so** is created in the **$CUBRID/lib** directory. ::

    make_tz.sh

Timezone library and database compatibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The timezone library built by CUBRID includes an MD5 checksum of the timezone data it contains. This hash is stored, for every database created with that library, in the **timezone_checksum** column of the **db_root** system table. If the timezone library is changed (recompiled with newer timezone data) and the checksum changes, neither the CUBRID server nor any other CUBRID tool can be started with existing databases. To avoid such incompatibility, one option is to use the **extend** argument of the **make_tz** tool.
When this option is provided, the **timezone_checksum** value of the database is also updated to the new MD5 checksum of the new timezone library.

The extend feature should be used when you decide to use a different version of the timezone library from the IANA site. It does two things:

- It generates a new library by merging the old timezone data with the new timezone data. After the merge, all timezone regions that are not present in the new timezone database are kept, in order to ensure backward compatibility with the data in the database tables.
- It updates the timezone data in the tables in those situations where backward compatibility could not be ensured in the first phase, when the new timezone library was generated. This situation can occur when an offset rule or a daylight saving rule changes.

When you run **make_tz** with the extend option, all the databases in your database directory file (**databases.txt**) are updated together with the MD5 checksum.

There are some corner cases:

- When multiple users share the same CUBRID installation and an extend is done by one of them, and that user does not have access rights on the files that contain the databases of the other users, those databases are not updated. If a different user, for whom the update was not made, later tries to do an extend on his or her databases, it will fail, because the checksum of the library will differ from the one stored in those databases.
- Also, if the **CUBRID_DATABASES** environment variable has a different value for some users, they will have different **databases.txt** files. In this situation, updating all the databases at once is currently not possible. There are two solutions for this with the current implementation:

  - Each user grants access to the folders that contain his or her databases, and the **CUBRID_DATABASES** variable must have the same value for all users.
  - If this is not possible, then do the following for each user except the last one:

    - Back up the current timezone library and the **databases.txt** file
    - Delete from the **databases.txt** file all the databases except the ones of the current user
    - Run the extend
    - Restore the **databases.txt** file and the timezone library

    For the last user, the only difference is that just the **databases.txt** file should be backed up and restored.

In Linux: ::

    make_tz.sh -g extend

In Windows: ::

    make_tz.bat /extend

Usage of timezone data types with JDBC
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The CUBRID JDBC driver depends completely on the CUBRID server for timezone information. Although Java and CUBRID use the same primary source of timezone information (IANA), the names of regions and the timezone information itself should be considered incompatible between the two.

All CUBRID data types having a timezone are mapped to **CUBRIDTimestamptz** Java objects.

Using JDBC to insert a value with timezone:

.. code-block:: java

    String datetime = "2000-01-01 01:02:03.123";
    String timezone = "Europe/Kiev";
    CUBRIDTimestamptz dt_tz = CUBRIDTimestamptz.valueOf(datetime, false, timezone);
    PreparedStatement pstmt = conn.prepareStatement("insert into t values(?)");
    pstmt.setObject(1, dt_tz);
    pstmt.executeUpdate();

Using JDBC to retrieve a value with timezone:

.. code-block:: java

    String sql = "select * from tz";
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery(sql);
    CUBRIDTimestamptz object1 = (CUBRIDTimestamptz) rs.getObject(1);
    System.out.println("object: " + object1.toString());

Internally, CUBRID JDBC stores the date/time parts of a **CUBRIDTimestamptz** object in a 'long' value (inherited through the Date object) which holds the number of milliseconds elapsed since 1st January 1970 (the Unix epoch). The internal encoding is performed in the UTC time reference, which differs from Timestamp objects, which use the local timezone.
For this reason, a **CUBRIDTimestamptz** object created with the same timezone as the Java local timezone will not hold the same internal epoch value. In order to obtain the Unix epoch, the **getUnixTime()** method may be used:

.. code-block:: java

    String datetime = "2000-01-01 01:02:03.123";
    String timezone = "Asia/Seoul";
    CUBRIDTimestamptz dt_tz = CUBRIDTimestamptz.valueOf(datetime, false, timezone);
    System.out.println("dt_tz.getTime: " + dt_tz.getTime());
    System.out.println("dt_tz.getUnixTime: " + dt_tz.getUnixTime());

.. _char-data-conf-guide:

Configuration Guide for Characters
==================================

Database designers should take character data properties into account when designing the database structure. The following is a summarized guide for configuring the aspects related to CUBRID character data.

Locale
------

* By default, en_US gives the best performance. If you plan to use only English, this is recommended.
* Using a UTF-8 locale increases the storage requirement of fixed-length character data (CHAR) by 4 times; using EUC-KR increases it 3 times.
* If user string literals have a different charset and collation from the system ones, query strings grow in size, as the string literals are decorated with them.
* If localized (non-ASCII) characters will be used for identifiers, use a .utf8 locale.
* Once the UTF-8 charset is established for the database, it is better to use an LDML locale than a system locale (this ensures that identifier names containing most Unicode characters are correctly cased).
* Setting a locale also affects conversion functions (intl_date_lang, intl_number_lang).
* When you set the locale at database creation, there is no need to worry about the charset and collation of string literals or user table columns; all of them can be changed at run-time (with :func:`CAST` in queries) or with ALTER .. CHANGE for a permanent change.

CHAR and VARCHAR
----------------

* Generally, use VARCHAR if there are large variations in the actual number of characters in user data.
* CHAR is a fixed-length type. Therefore, even if you store only English characters in a CHAR column, it requires 4 bytes of storage per character in UTF-8 and 3 bytes in EUC-KR.
* The precision of columns refers to the number of characters (glyphs).
* After choosing the precision, the charset and collation should be set according to the most frequent usage scenarios.

Choosing Charset
----------------

* Even if your text contains non-ASCII characters, use the utf8 or euckr charsets only if the application requires character counting, inserting or replacing.
* For CHAR data, the main concern should be the storage requirement (4x for utf8, 3x for euckr).
* For both CHAR and VARCHAR data, there is some overhead when inserting/updating data: counting the precision (number of characters) of each instance is more costly for non-ISO charsets.
* In queries, the charset of expressions may be converted using the :func:`CAST` operator.

Choosing Collation
------------------

* If no collation-dependent operations are performed (string searching, sorting, comparisons, casing), then choose the bin collation of that charset, or the binary collation.
* Collation may be easily overridden using the :func:`CAST` operator, and the :ref:`COLLATE modifier ` (in version 9.1) if the charset is unchanged between the original charset of the expression and the new collation.
* Collation also controls the casing rules of strings.
* Collations with expansions are slower, but are more flexible and perform whole-word sorting.

Normalization
-------------

* If your client applications send text data to CUBRID in decomposed form, configure **unicode_input_normalization** = yes, so that CUBRID re-composes it and handles it in composed form.
* If your client "knows" how to handle data only in decomposed form, set **unicode_output_normalization** = yes, so that CUBRID always sends it in decomposed form.
* If the client "knows" both forms, then leave **unicode_output_normalization** = no.

CAST vs COLLATE
---------------

* When building statements, the :func:`CAST` operator is more costly than the :ref:`COLLATE modifier ` (even more so when charset conversion occurs).
* The :ref:`COLLATE modifier ` does not add an additional execution operator; using the :ref:`COLLATE modifier ` should enhance execution speed over using the :func:`CAST` operator.
* The :ref:`COLLATE modifier ` can be used only when the charset is not changed.

Remark
======

* Query plan printing: the collation is not displayed in plans for results with late binding.
* Only the Unicode codepoints in the range 0000-FFFF (Basic Multilingual Plane) are normalized.
* Some locales use the space character as the separator for digit grouping (thousands, millions, ...). Space is allowed, but does not work properly in some cases of localized conversion from string to number.

.. note::

    * In version 9.2 or lower, a user-defined variable cannot be changed to a collation different from the system collation. For example, the "SET @v1='a' COLLATE utf8_en_cs;" syntax cannot be executed when the system collation is iso88591.
    * In version 9.3 or higher, the above constraint no longer exists.

Guide for Adding Locales and Collations
=======================================

Most new locales and/or collations can be added by the user, simply by adding a new LDML file (or changing an existing one). The LDML file format used by CUBRID is derived from the generic Unicode Locale Data Markup Language (http://www.unicode.org/reports/tr35/). The tags and attributes which are specific to CUBRID can be easily identified (they contain "cubrid" in their naming).

The best approach to adding a new locale is to copy an existing LDML file and tweak various settings until the desired results are obtained. The filename must be formatted like cubrid_<*language*>.xml and placed in the folder **$CUBRID/locales/data/ldml**.
The <*language*> part should be an ASCII string (normally five characters) in IETF format (https://en.wikipedia.org/wiki/BCP_47). After creating the LDML file, the <*language*> part string must be added to the CUBRID configuration file **$CUBRID/conf/cubrid_locales.txt**. Note that the order in this file is the order of generating (compiling) the locale library and of loading the locales at start-up.

The **make_locale** script must be used to compile the newly added locale and add its data into the CUBRID locales library (in **$CUBRID/lib/**). The LDML file is expected in UTF-8 encoding, and it is not possible to add more than one locale into the same LDML file.

Adding a new locale in an LDML file requires:

.. "and currency" is removed because MONETARY is deprecated.

* specifying calendar information (CUBRID date formats, names of months and week days in various forms, names for the AM/PM day periods). CUBRID supports only the Gregorian calendar (generic LDML specifies other calendar types, which are not supported by CUBRID).
* specifying number settings (digit grouping symbols)
* providing an alphabet (a set of rules for how letters are upper-cased and lower-cased)
* optionally, adding some collations
* also optionally, defining console conversion rules for the Windows CSQL application

LDML Calendar Information
-------------------------

* The first part consists of providing default CUBRID formats for conversion of the **DATE**, **DATETIME**, **TIME**, **TIMESTAMP**, **DATETIME WITH TIME ZONE** and **TIMESTAMP WITH TIME ZONE** data types to/from string. These formats are used by the functions :func:`TO_DATE`, :func:`TO_TIME`, :func:`TO_DATETIME`, :func:`TO_TIMESTAMP`, :func:`TO_CHAR`, :func:`TO_DATETIME_TZ` and :func:`TO_TIMESTAMP_TZ`. The format elements allowed depend on the data type and are the ones used for the :func:`TO_CHAR` function (:ref:`Date/Time Format 1 `). Only ASCII characters are allowed in the format strings.
  The allowed sizes are 30 bytes (characters) for the **DATE** and **TIME** formats, 48 characters for the **DATETIME** and **TIMESTAMP** formats, and 70 characters for the **DATETIME WITH TIME ZONE** and **TIMESTAMP WITH TIME ZONE** formats.

* The '<months>' tag requires the names of the months in both long and abbreviated form. The allowed sizes are 15 characters (or 60 bytes) for the abbreviated form and 25 characters (or 100 bytes) for the normal form.
* The '<days>' tag requires the week day names in both long and abbreviated form. The allowed sizes are 10 characters (or 40 bytes) for the abbreviated form and 15 characters (or 60 bytes) for the full day name.
* The '<dayPeriods>' sub-tree requires defining the strings for the AM/PM format variants (according to the type attribute). The allowed size is 10 characters (or 40 bytes).

The month and week-day names (in both long and abbreviated form) must be specified in Camel case (first letter upper case, the rest lower case). CUBRID checks only the maximum allowed size in bytes; the size in characters is computed for full-width UTF-8 characters (4 bytes), so it is possible to set a month name having 100 ASCII-only characters (the 25-character limit applies when each character of the month name is encoded on 4 bytes in UTF-8).

.. LDML Numbers and currency information
.. currency part will not be used because it is used only in MONETARY, and MONETARY will be deprecated.
.. keep the currency related info as comments for the future. (if the currency part is revised, its explanation can be used again.)

LDML Numbers information
------------------------

* This tag defines the characters used as symbols for splitting the decimal part from the integer part of numbers, and for grouping digits. CUBRID expects only ASCII characters for these symbols; an empty or space character is not allowed. CUBRID performs grouping by 3 digits.

.. below is commented out.

.. * The '<currency>' tag defines the three letter ISO 4217 code
     (http://en.wikipedia.org/wiki/ISO_4217) for the default currency of the
     locale.
     Only one currency per locale is allowed. Setting the currency instructs
     CUBRID how to interpret conversions where the MONETARY data type is
     implied (:func:`TO_NUMBER`).

.. CUBRID supports a limited number of currencies:

   +------------------------+----------------------+
   | Currency ISO symbol    | Meaning              |
   +========================+======================+
   | USD                    | dollar sign          |
   +------------------------+----------------------+
   | JPY                    | Japanese yen         |
   +------------------------+----------------------+
   | GBP                    | British pound        |
   +------------------------+----------------------+
   | KRW                    | Korean won           |
   +------------------------+----------------------+
   | TRY                    | Turkish lira         |
   +------------------------+----------------------+
   | KHR                    | Cambodian riel       |
   +------------------------+----------------------+
   | CNY                    | Chinese renminbi     |
   +------------------------+----------------------+
   | INR                    | Indian rupee         |
   +------------------------+----------------------+
   | RUB                    | Russian ruble        |
   +------------------------+----------------------+
   | AUD                    | Australian dollar    |
   +------------------------+----------------------+
   | CAD                    | Canadian dollar      |
   +------------------------+----------------------+
   | BRL                    | Brazilian real       |
   +------------------------+----------------------+
   | RON                    | Romanian leu         |
   +------------------------+----------------------+
   | EUR                    | euro                 |
   +------------------------+----------------------+
   | CHF                    | Swiss franc          |
   +------------------------+----------------------+
   | DKK                    | Danish krone         |
   +------------------------+----------------------+
   | NOK                    | Norwegian krone      |
   +------------------------+----------------------+
   | BGN                    | Bulgarian lev        |
   +------------------------+----------------------+
   | VND                    | Vietnamese dong      |
   +------------------------+----------------------+
   | CZK                    | Czech koruna         |
   +------------------------+----------------------+
   | PLN                    | Polish zloty         |
   +------------------------+----------------------+
   | SEK                    | Swedish krona        |
   +------------------------+----------------------+
   | HRK                    | Croatian kuna        |
   +------------------------+----------------------+
   | RSD                    | Serbian dinar        |
   +------------------------+----------------------+

LDML Alphabet
-------------

This section allows defining the casing rules for the alphabet of the locale. The 'CUBRIDAlphabetMode' attribute defines the primary source of data for characters. Normally, this should be set to "UNICODEDATAFILE", a value which instructs CUBRID to use the Unicode data file (**$CUBRID/locales/data/unicodedata.txt**). This file must not be modified; any customization for certain characters should be done in the LDML file. If this value is configured, all Unicode characters up to codepoint 65535 are loaded with their casing information.

The other allowed value for 'CUBRIDAlphabetMode' is "ASCII", in which case only ASCII characters can be lower-cased, upper-cased or compared case-insensitively in matching functions. This does not affect CUBRID's ability to support all UTF-8 four-byte encoded Unicode characters; it just limits the casing ability for the characters not included.

The casing rules are optional and apply on top of the primary source of character information (UNICODEDATAFILE or ASCII). CUBRID allows defining upper casing rules ('<u>' tag) and lower casing rules ('<l>' tag). Each set of upper and lower casing rules consists of pairs of source-destination ('<s>' = source, '<d>' = destination). For instance, the following defines a rule that each character "A" is lower-cased to "aa" (two characters "a"):

.. code-block:: xml

    <l>
        <s>A</s>
        <d>aa</d>
    </l>

LDML Console Conversion
-----------------------

In Windows, the console does not support UTF-8 encoding, so CUBRID allows translating characters from their UTF-8 encoding to a desired encoding. After configuring console conversion for a locale, and prior to starting the CSQL application, the user must set the codepage of the console using the 'chcp' command (the codepage argument must match the 'windows_codepage' attribute in LDML).
The conversion works bidirectionally (input and output in CSQL), but is limited to the Unicode characters which can be converted to the configured codepage.

The console conversion element is optional and instructs CSQL how to print (in which character encoding) the text in interactive commands. The 'type' attribute defines the conversion scheme. The allowed values are:

* ISO: a generic scheme in which the destination codepage is a single-byte charset
* ISO88591: a predefined single-byte scheme for the ISO-8859-1 charset (the 'file' attribute is not required and is ignored)
* ISO88599: a predefined single-byte scheme for the ISO-8859-9 charset (the 'file' attribute is also not required)
* DBCS: Double Byte Code-Set; a generic scheme in which the destination codepage is a double-byte charset

The 'windows_codepage' attribute is the value of the Windows codepage for which CUBRID automatically activates console conversion. The 'linux_charset' attribute is the corresponding value for the charset part of the **LANG** environment variable on UNIX systems; it is recommended to use the native CUBRID charset in a Linux console. The 'file' attribute is required only for the "ISO" and "DBCS" values of the 'type' attribute, and is the file containing the translation information (in **$CUBRID/locales/data/codepages/**).

LDML Collation
--------------

Configuring a collation is the most complex task when adding an LDML locale to CUBRID. Only collations with the UTF-8 codeset can be configured. CUBRID allows configuring most constructs specified by UCA - the Unicode Collation Algorithm (http://www.unicode.org/reports/tr10/) - including contractions and expansions, but the properties of the collation are mostly controlled via the 'settings' tag.

An LDML file can contain multiple collations. Collations can be included from an external file using the 'include' tag. The 'validSubLocales' attribute of the 'collations' tag is a filter controlling locale compilation when external collations (from external files) are included.
Its value can be either a list of locales, or "*", in which case the collations in the sub-tree are added to all locales from which the file is included. One collation is defined using the 'collation' tag and its sub-tree. The 'type' attribute indicates the name of the collation, as it will be added in CUBRID. The 'settings' tag defines the properties of the collation:

* 'id' is the (internal) numeric identifier used by CUBRID. It is an integer value in the range (32 - 255). It is optional, but it is strongly recommended to set an explicit, unassigned value. Please see :ref:`collation-naming-rules`.
* 'strength' is a measure of how strings compare. See :ref:`collation-properties`. The allowed values are:

  * "quaternary": different graphic symbols of the same character compare differently, but different Unicode codepoints may compare equal.
  * "tertiary": graphic symbols of the same character are equal; case-sensitive collation.
  * "secondary": case-insensitive collation; characters with accents compare differently.
  * "primary": accents are ignored; all characters compare as the base character.

* 'caseLevel': a special setting to enable case-sensitive comparison for collations having strength < tertiary. Valid values are "on" or "off".
* 'caseFirst': the order of casing. Valid values are "lower", "upper" and "off". The "upper" value means that upper case letters are ordered before the corresponding lower case letters.
* 'CUBRIDMaxWeights': the number of codepoints (or the last codepoint + 1) which are customized in the collation. The maximum value is 65536. Increasing this value increases the size of the collation data.
* 'DUCETContractions': valid values are "use" or "ignore". When set to "use", CUBRID uses in the collation the contractions defined by the DUCET file ($CUBRID/locales/data/ducet.txt); otherwise it ignores them.
* 'TailoringContractions': same as the previous setting, but refers to the contractions defined by, or derived from, explicit collation rules.
Enabling contractions leads to a more complex collation (slower string compares).

* 'CUBRIDExpansions': allowed values are "use" or "ignore" (default), and refers to the usage of collation expansions from both the DUCET file and the tailoring rules. This has the most influence on collation properties. Enabling it results in a compare with multiple passes (up to the collation strength) when comparing strings. It also greatly increases the collation data, with the benefit of obtaining a more "natural" sort order. See :ref:`expansion`.
* 'backwards': "on" or "off". Used to obtain "french" order by performing an end-to-start compare on the secondary level (for accents). It has effect only when 'CUBRIDExpansions' is enabled.
* 'MatchContractionBoundary': "true" or "false". This is used in collations having expansions and contractions to configure the behavior of string matching when a contraction is found.

The main data for a collation is loaded from the DUCET file. After this step, the collation may be customized using "tailoring rules". These are the '<rules>' tag (LDML) and the '<cubridrules>' tag (CUBRID specific).

The 'cubridrules' tag is optional and can be used to explicitly set weight values for a codepoint or a range of codepoints. The cubrid rules apply after loading the primary collation data from the DUCET file and before applying the UCA rules (from the '<rules>' tag). Each of these rules is enclosed in a '<set>' tag. If the rule refers to only one Unicode codepoint, then a '<cp>' tag is provided which contains the hexadecimal value of the codepoint.

All available CUBRID collations contain this cubrid-rule:

.. code-block:: xml

    <set>
      <cp>20</cp>
      <w>[0.0.0.0]</w>
    </set>

This rule says that the weight values (UCA defines four weight values per collation element) of the codepoints starting with 20 (which is the ASCII space character) are all set to zero. Since there is no '<ecp>' tag, the only codepoint affected is 20. In CUBRID, the space character compares as zero.

The allowed tags inside of a '<set>' rule are:

* '<cp>': rule to set the weights for a single codepoint.
* '<ch>': rule to set the weights for a single character. Similar to the previous one, but instead of a codepoint it expects a Unicode character (in UTF-8 encoding).
* '<scp>': rule to set the weights for a range of codepoints. This is the starting codepoint.
* '<sch>': rule to set the weights for a range of characters. This is the starting character. In this context, the order of characters is given by their Unicode codepoints.
* '<ecp>': end codepoint for a range rule.
* '<ech>': end character for a range rule.
* '<w>': weight values to set (single value). The weight values are expected in hexadecimal. Each collation element has four values which are delimited by periods and enclosed in square brackets ([]). There can be up to 10 collation elements.
* '<wr>': starting weight values to set for a range. Optionally, this tag has a 'step' attribute, which sets the increment applied after each codepoint. By default the step is [0001.0000.0000.0000], which means that after setting the first weight values for the starting codepoint, one value is added to the primary level weight and set for the next codepoint in the range, and the process is repeated until the end codepoint.

Examples:

.. code-block:: xml

    <set>
      <scp>0</scp>
      <ecp>20</ecp>
      <w>[0.0.0.0]</w>
    </set>

    <set>
      <scp>30</scp>
      <ecp>39</ecp>
      <wr step="[1.0.0.0][1.0.0.0]">[30.0.0.0][30.0.0.0]</wr>
    </set>

Rule 1 sets, for codepoints ranging from 0 to 20 (inclusive), all weight values to 0. Rule 2 sets, for codepoints ranging from 30 to 39 (which are the digits), a set of two collation elements with increasing weights; in this example, codepoint 39 (character "9") will have the weights with two collation elements [39.0.0.0][39.0.0.0].

The '<rules>' tag is also optional, and conforms to the LDML and UCA specifications. The meanings of its sub-ordinate tags are:

* '<reset>': anchor collation element. It defines the reference to which subsequent rules (up to the next '<reset>') are tailored. It can be a single character, or multiple characters, in which case it is either a contraction or an expansion.
  By default, all the tailoring rules after the anchor sort "after" (the element from the first rule is after the anchor, the element from the second rule sorts after the element in the first rule); if the optional attribute "before" is present, then only the first rule after the '<reset>' sorts before the anchor, while the second and following rules resume the normal "after" sort (the element in the second rule sorts after the element in the first rule).

* '<p>': the character comes after (or before, if the anchor had the 'before' attribute) the previously tailored one at the primary level.
* '<s>': the character comes after (or before, if the anchor had the 'before' attribute) the previously tailored one at the secondary level.
* '<t>': the character comes after (or before, if the anchor had the 'before' attribute) the previously tailored one at the tertiary level.
* '<i>': the character sorts identically to the previous one.
* '<pc>', '<sc>', '<tc>', '<ic>': same as '<p>', '<s>', '<t>', '<i>', but applied to a range of characters.
* '<x>': specifies the expansion character.
* '<extend>': specifies the second character of an expansion.
* '<context>': specifies the context in which a rule applies. A variant way to specify contractions and expansions.

For more information on UCA tailoring with LDML rules, see http://www.unicode.org/reports/tr35/tr35-collation.html.

.. rubric:: Footnotes

.. [#f1] glyph: an element for the shape of a character; a graphic symbol which indicates a shape or form for a character. Because a glyph specifies the shape which is shown, several glyphs can exist for one character.
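Putting the pieces above together, the following sketch shows how a tailored collation definition might look inside an LDML file. The collation name "utf8_my_coll", the 'id' value 200, and the tailoring rules are purely illustrative (not taken from a shipped locale), and the rendering of each property as an attribute of the '<settings>' tag follows standard LDML usage; verify the exact shape against a locale file shipped under **$CUBRID/locales/data/ldml/** before relying on it.

.. code-block:: xml

    <collations validSubLocales="*">
      <collation type="utf8_my_coll">
        <!-- illustrative settings: tertiary strength, upper case first,
             contractions from tailoring rules only, no expansions -->
        <settings id="200" strength="tertiary" caseFirst="upper"
                  CUBRIDMaxWeights="600"
                  DUCETContractions="ignore" TailoringContractions="use"
                  CUBRIDExpansions="ignore"/>

        <!-- CUBRID-specific rule: make the space character compare as zero -->
        <cubridrules>
          <set>
            <cp>20</cp>
            <w>[0.0.0.0]</w>
          </set>
        </cubridrules>

        <!-- LDML tailoring: the contraction "ch" sorts right after "c"
             at the primary level -->
        <rules>
          <reset>c</reset>
          <p>ch</p>
        </rules>
      </collation>
    </collations>

The order of the pieces mirrors how they are applied: DUCET data is loaded first, the '<cubridrules>' adjustments are applied next, and the '<rules>' tailoring is applied last.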
A片 被玩弄的50岁熟妻 歴代豊滿巨乳大人気女優 东莞21,55实拍视频 乱情影视的兄弟网站 xxxvdynom 快播色色影院 国产在线主播 91小青蛙红杏出墙3p 男主叫佐藤女主叫千里的里番 秘杜mmsscom 南京熟妇做爱视频 韩国主播梦 波 福利 耐克真黄色录像一级片 免费视频一级AV 摸I奶乳头视频 国模白鹭 色伊人好看动漫色 白丝福利露裆部 免賛福利影院 福利在线神马51 国产无码偷拍小视频 国产出租屋偷拍链接 延禧宫电视剧全集西瓜 小视频黄网 丁香五月欧洲视频播放 正在播放国产熟女 莞式按摩服务伦理 干逼网址免费在线观看 撸网在线 亚洲野狼群AV 手机五月丁人网 a阿vvt天堂2014在线 东方影视 永久在线 东方四虎av伊人 三级纶理片 肛交乳交奶推视频 恋足足交视频92 四虎 小穴 韩日AV 好吊曰妞欧美 在线高清高速影院 日性感少妇 快播巨乳少女在线播放 李湘人体忆书照 亚洲视频夜必橹 美女伦理图片大全 欧美性交私处图 若兰神马影院 农村操逼网 黄色三级小说 影音先锋熟女淫乱 女明星无优人体 金发女郎爽射无码 就去干bb 欧美人体五月天 男人日女人下面的视频 欧洲下体艺术照 色中色色中色白毛 爆操少妇在线 www003secom 人体私拍海天翼 傅贞怡艺术照 超级淫荡的嫂子勾引我 继母裸体 巨乳shufu 美女强奸视频快播 佐佐木心音影音先锋 988性爱网 台湾色老中文网 快播电影性爱女人 办公室爱爱自拍视频 欧美兽交肛交群交口交双插 美女做僾动感图片 姐也要内射图片 91热色色色 色岛撸水妃 口暴10p 六月婷婷激情网 影音先锋亚洲尻屄 干40多岁的女人 明星合成人体艺术 操弄美女 三p美少妇3p 黄榕胆大人体艺术