The std.uni module provides an implementation of fundamental Unicode algorithms and data structures. It does not include UTF encoding and decoding primitives; for those, see std.utf.decode and std.utf.encode in std.utf.
All primitives listed operate on Unicode characters and sets of characters. For functions that operate on ASCII characters and ignore non-ASCII ones, see std.ascii. For definitions of Unicode character, code point and other terms used throughout this module, see the terminology section below.
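For a quick illustration of the difference between the two modules, here is a minimal sketch contrasting their isAlpha functions (the renamed import for std.ascii is just a local convention for this example):

```d
import std.ascii : asciiAlpha = isAlpha;
import std.uni : isAlpha;

void main()
{
    // both agree on ASCII letters
    assert(asciiAlpha('z') && isAlpha('z'));
    // only std.uni recognizes non-ASCII alphabetic characters
    assert(!asciiAlpha('λ') && isAlpha('λ'));
}
```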
The focus of this module is the core needs of developing Unicode-aware applications. To that effect it provides the following optimized primitives:
It's recognized that an application may need further enhancements and extensions, such as less commonly known algorithms, or tailoring existing ones for region-specific needs. To help users build extra functionality beyond the core primitives, the module provides:
```d
import std.algorithm : find;
import std.uni;

void main()
{
    // initialize code point sets using script/block or property name
    // now 'set' contains code points from both scripts
    auto set = unicode("Cyrillic") | unicode("Armenian");
    // the same thing but simpler and checked at compile-time
    auto ascii = unicode.ASCII;
    auto currency = unicode.Currency_Symbol;

    // easy set ops
    auto a = set & ascii;
    assert(a.empty); // as it has no intersection with ascii
    a = set | ascii;
    auto b = currency - a; // subtract all ASCII, Cyrillic and Armenian

    // some properties of code point sets
    assert(b.length > 45); // 46 items in Unicode 6.1, even more in 6.2
    // testing presence of a code point in a set
    // is just fine, it is O(logN)
    assert(!b['$']);
    assert(!b['\u058F']); // Armenian dram sign
    assert(b['¥']);

    // building fast lookup tables, these guarantee O(1) complexity
    // 1-level Trie lookup table, essentially a huge bit-set ~262Kb
    auto oneTrie = toTrie!1(b);
    // 2-level is far more compact but typically slightly slower
    auto twoTrie = toTrie!2(b);
    // 3-level is even smaller, and a bit slower yet
    auto threeTrie = toTrie!3(b);
    assert(oneTrie['£']);
    assert(twoTrie['£']);
    assert(threeTrie['£']);

    // build the trie with the most sensible trie level
    // and bind it as a functor
    auto cyrillicOrArmenian = toDelegate(set);
    auto balance = find!(cyrillicOrArmenian)("Hello ընկեր!");
    assert(balance == "ընկեր!");
    // compatible with bool delegate(dchar)
    bool delegate(dchar) bindIt = cyrillicOrArmenian;

    // Normalization
    string s = "Plain ascii (and not only), is always normalized!";
    assert(s is normalize(s)); // is the same string

    string nonS = "A\u0308ffin"; // A + combining diaeresis, not precomposed Ä
    auto nS = normalize(nonS); // to NFC, the W3C endorsed standard
    assert(nS == "Äffin");
    assert(nS != nonS);
    string composed = "Äffin";

    assert(normalize!NFD(composed) == "A\u0308ffin");
    // to NFKD, compatibility decomposition useful for fuzzy matching/searching
    assert(normalize!NFKD("2¹⁰") == "210");
}
```
The following is a list of important Unicode notions and definitions. Any conventions used specifically in this module alone are marked as such. The descriptions are based on the formal definitions found in chapter three of The Unicode Standard Core Specification (http://www.unicode.org/versions/Unicode6.2.0/ch03.pdf).
This module defines a number of primitives that work with graphemes: Grapheme, decodeGrapheme and graphemeStride. All of them use extended grapheme boundaries as defined in the aforementioned standard annex.
The concepts of canonical equivalent or compatibility equivalent characters in the Unicode Standard make it necessary to have a full, formal definition of equivalence for Unicode strings. String equivalence is determined by a process called normalization, whereby strings are converted into forms which are compared directly for identity. This is the primary goal of the normalization process, see the function normalize to convert into any of the four defined forms.
A very important attribute of the Unicode Normalization Forms is that they must remain stable between versions of the Unicode Standard. A Unicode string normalized to a particular Unicode Normalization Form in one version of the standard is guaranteed to remain in that Normalization Form for implementations of future versions of the standard.
The Unicode Standard specifies four normalization forms. Informally, two of these forms are defined by maximal decomposition of equivalent sequences, and two of these forms are defined by maximal composition of equivalent sequences.
The choice of the normalization form depends on the particular use case. NFC is the best form for general text, since it's more compatible with strings converted from legacy encodings. NFKC is the preferred form for identifiers, especially where there are security concerns. NFD and NFKD are the most useful for internal processing.
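For instance, NFKC folds compatibility variants that NFC keeps distinct, which is typically what identifier matching wants. A small sketch (the mappings follow from the standard's compatibility decompositions):

```d
import std.uni : normalize, NFC, NFKC;

void main()
{
    // NFC preserves the "fi" ligature (U+FB01) as-is...
    assert(normalize!NFC("ﬁle") == "ﬁle");
    // ...while NFKC folds it to plain "fi"
    assert(normalize!NFKC("ﬁle") == "file");
}
```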
The Unicode standard describes a set of algorithms that depend on having the ability to quickly look up various properties of a code point. Given the codespace of about 1 million code points, it is not a trivial task to provide a space-efficient solution for the multitude of properties.
Common approaches such as hash tables or binary search over sorted code point intervals (as in InversionList) are insufficient. Hash tables have an enormous memory footprint, and binary search over intervals is not fast enough for some heavy-duty algorithms.
The recommended solution (see Unicode Implementation Guidelines) is using multi-stage tables that are an implementation of the Trie data structure with integer keys and a fixed number of stages. For the remainder of this section this will be called a fixed trie. The following describes a particular implementation that is aimed at speed of access at the expense of ideal size savings.
Taking a 2-level Trie as an example, the principle of operation is as follows. Split the number of bits in a key (code point, 21 bits) into 2 components (e.g. 13 and 8). The first is the number of bits in the index of the trie and the other is the number of bits in each page of the trie. The layout of the trie is then an array of size 2^^bits-of-index followed by an array of memory chunks of size 2^^bits-of-page/bits-per-element.
The number of pages is variable (but not less than 1), unlike the number of entries in the index. The slots of the index all have to contain the number of a page that is present. The lookup is then just a couple of operations: slice the upper bits, look up an index for these, take a page at this index and use the lower bits as an offset within this page. Assuming that pages are laid out consecutively in one array at pages, the pseudo-code is:
```d
auto elemsPerPage = (2 ^^ bits_per_page) / Value.sizeOfInBits;
pages[index[n >> bits_per_page]][n & (elemsPerPage - 1)];
```
Where if elemsPerPage is a power of 2, the whole process is a handful of simple instructions and 2 array reads. Subsequent levels of the trie are introduced by recursing on this notion: the index array is treated as values, and the number of bits in the index is again split into 2 parts, with pages over the 'current index' and the new 'upper index'.
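To make the arithmetic concrete, here is a standalone sketch (not the module's internals) of a 2-level lookup with a 13/8 bit split, where pages of bools are stored back to back in one flat array:

```d
enum bitsPerPage = 8;             // lower 8 bits: offset within a page
enum pageSize = 1 << bitsPerPage; // 256 entries per page

bool lookup(const ushort[] index, const bool[] pages, dchar n)
{
    // the upper 13 bits select a page number through the index array
    auto page = index[n >> bitsPerPage];
    // the lower 8 bits give the offset within that page
    return pages[page * pageSize + (n & (pageSize - 1))];
}
```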
For completeness a level 1 trie is simply an array. The current implementation takes advantage of bit-packing values when the range is known to be limited in advance (such as bool). See also BitPacked for enforcing it manually. The major size advantage however comes from the fact that multiple identical pages on every level are merged by construction.
The process of constructing a trie is more involved and is hidden from the user behind the convenience functions codepointTrie, codepointSetTrie and the even more convenient toTrie. In general, a set or a built-in AA with dchar keys can be turned into a trie. The trie object in this module is read-only (immutable); it's effectively frozen after construction.
This is a full list of Unicode properties accessible through unicode with specific helpers per category nested within. Consult the CLDR utility when in doubt about the contents of a particular set.
General category sets listed below are only accessible with the unicode shorthand accessor.
Abb. | Long form | Abb. | Long form | Abb. | Long form |
---|---|---|---|---|---|
L | Letter | Cn | Unassigned | Po | Other_Punctuation |
Ll | Lowercase_Letter | Co | Private_Use | Ps | Open_Punctuation |
Lm | Modifier_Letter | Cs | Surrogate | S | Symbol |
Lo | Other_Letter | N | Number | Sc | Currency_Symbol |
Lt | Titlecase_Letter | Nd | Decimal_Number | Sk | Modifier_Symbol |
Lu | Uppercase_Letter | Nl | Letter_Number | Sm | Math_Symbol |
M | Mark | No | Other_Number | So | Other_Symbol |
Mc | Spacing_Mark | P | Punctuation | Z | Separator |
Me | Enclosing_Mark | Pc | Connector_Punctuation | Zl | Line_Separator |
Mn | Nonspacing_Mark | Pd | Dash_Punctuation | Zp | Paragraph_Separator |
C | Other | Pe | Close_Punctuation | Zs | Space_Separator |
Cc | Control | Pf | Final_Punctuation | - | Any |
Cf | Format | Pi | Initial_Punctuation | - | ASCII |
Sets for other commonly useful properties that are accessible with unicode:
Name | Name | Name |
---|---|---|
Alphabetic | Ideographic | Other_Uppercase |
ASCII_Hex_Digit | IDS_Binary_Operator | Pattern_Syntax |
Bidi_Control | ID_Start | Pattern_White_Space |
Cased | IDS_Trinary_Operator | Quotation_Mark |
Case_Ignorable | Join_Control | Radical |
Dash | Logical_Order_Exception | Soft_Dotted |
Default_Ignorable_Code_Point | Lowercase | STerm |
Deprecated | Math | Terminal_Punctuation |
Diacritic | Noncharacter_Code_Point | Unified_Ideograph |
Extender | Other_Alphabetic | Uppercase |
Grapheme_Base | Other_Default_Ignorable_Code_Point | Variation_Selector |
Grapheme_Extend | Other_Grapheme_Extend | White_Space |
Grapheme_Link | Other_ID_Continue | XID_Continue |
Hex_Digit | Other_ID_Start | XID_Start |
Hyphen | Other_Lowercase | |
ID_Continue | Other_Math | |
Below is the table of block names accepted by unicode.block. Note that the shorthand version unicode requires "In" to be prepended to a block name so as to disambiguate between scripts and blocks.

Name | Name | Name |
---|---|---|
Aegean Numbers | Ethiopic Extended | Mongolian |
Alchemical Symbols | Ethiopic Extended-A | Musical Symbols |
Alphabetic Presentation Forms | Ethiopic Supplement | Myanmar |
Ancient Greek Musical Notation | General Punctuation | Myanmar Extended-A |
Ancient Greek Numbers | Geometric Shapes | New Tai Lue |
Ancient Symbols | Georgian | NKo |
Arabic | Georgian Supplement | Number Forms |
Arabic Extended-A | Glagolitic | Ogham |
Arabic Mathematical Alphabetic Symbols | Gothic | Ol Chiki |
Arabic Presentation Forms-A | Greek and Coptic | Old Italic |
Arabic Presentation Forms-B | Greek Extended | Old Persian |
Arabic Supplement | Gujarati | Old South Arabian |
Armenian | Gurmukhi | Old Turkic |
Arrows | Halfwidth and Fullwidth Forms | Optical Character Recognition |
Avestan | Hangul Compatibility Jamo | Oriya |
Balinese | Hangul Jamo | Osmanya |
Bamum | Hangul Jamo Extended-A | Phags-pa |
Bamum Supplement | Hangul Jamo Extended-B | Phaistos Disc |
Basic Latin | Hangul Syllables | Phoenician |
Batak | Hanunoo | Phonetic Extensions |
Bengali | Hebrew | Phonetic Extensions Supplement |
Block Elements | High Private Use Surrogates | Playing Cards |
Bopomofo | High Surrogates | Private Use Area |
Bopomofo Extended | Hiragana | Rejang |
Box Drawing | Ideographic Description Characters | Rumi Numeral Symbols |
Brahmi | Imperial Aramaic | Runic |
Braille Patterns | Inscriptional Pahlavi | Samaritan |
Buginese | Inscriptional Parthian | Saurashtra |
Buhid | IPA Extensions | Sharada |
Byzantine Musical Symbols | Javanese | Shavian |
Carian | Kaithi | Sinhala |
Chakma | Kana Supplement | Small Form Variants |
Cham | Kanbun | Sora Sompeng |
Cherokee | Kangxi Radicals | Spacing Modifier Letters |
CJK Compatibility | Kannada | Specials |
CJK Compatibility Forms | Katakana | Sundanese |
CJK Compatibility Ideographs | Katakana Phonetic Extensions | Sundanese Supplement |
CJK Compatibility Ideographs Supplement | Kayah Li | Superscripts and Subscripts |
CJK Radicals Supplement | Kharoshthi | Supplemental Arrows-A |
CJK Strokes | Khmer | Supplemental Arrows-B |
CJK Symbols and Punctuation | Khmer Symbols | Supplemental Mathematical Operators |
CJK Unified Ideographs | Lao | Supplemental Punctuation |
CJK Unified Ideographs Extension A | Latin-1 Supplement | Supplementary Private Use Area-A |
CJK Unified Ideographs Extension B | Latin Extended-A | Supplementary Private Use Area-B |
CJK Unified Ideographs Extension C | Latin Extended Additional | Syloti Nagri |
CJK Unified Ideographs Extension D | Latin Extended-B | Syriac |
Combining Diacritical Marks | Latin Extended-C | Tagalog |
Combining Diacritical Marks for Symbols | Latin Extended-D | Tagbanwa |
Combining Diacritical Marks Supplement | Lepcha | Tags |
Combining Half Marks | Letterlike Symbols | Tai Le |
Common Indic Number Forms | Limbu | Tai Tham |
Control Pictures | Linear B Ideograms | Tai Viet |
Coptic | Linear B Syllabary | Tai Xuan Jing Symbols |
Counting Rod Numerals | Lisu | Takri |
Cuneiform | Low Surrogates | Tamil |
Cuneiform Numbers and Punctuation | Lycian | Telugu |
Currency Symbols | Lydian | Thaana |
Cypriot Syllabary | Mahjong Tiles | Thai |
Cyrillic | Malayalam | Tibetan |
Cyrillic Extended-A | Mandaic | Tifinagh |
Cyrillic Extended-B | Mathematical Alphanumeric Symbols | Transport And Map Symbols |
Cyrillic Supplement | Mathematical Operators | Ugaritic |
Deseret | Meetei Mayek | Unified Canadian Aboriginal Syllabics |
Devanagari | Meetei Mayek Extensions | Unified Canadian Aboriginal Syllabics Extended |
Devanagari Extended | Meroitic Cursive | Vai |
Dingbats | Meroitic Hieroglyphs | Variation Selectors |
Domino Tiles | Miao | Variation Selectors Supplement |
Egyptian Hieroglyphs | Miscellaneous Mathematical Symbols-A | Vedic Extensions |
Emoticons | Miscellaneous Mathematical Symbols-B | Vertical Forms |
Enclosed Alphanumerics | Miscellaneous Symbols | Yijing Hexagram Symbols |
Enclosed Alphanumeric Supplement | Miscellaneous Symbols and Arrows | Yi Radicals |
Enclosed CJK Letters and Months | Miscellaneous Symbols And Pictographs | Yi Syllables |
Enclosed Ideographic Supplement | Miscellaneous Technical | |
Ethiopic | Modifier Tone Letters | |
Below is the table of script names accepted by unicode.script and by the shorthand version unicode:

Name | Name | Name |
---|---|---|
Arabic | Hanunoo | Old_Italic |
Armenian | Hebrew | Old_Persian |
Avestan | Hiragana | Old_South_Arabian |
Balinese | Imperial_Aramaic | Old_Turkic |
Bamum | Inherited | Oriya |
Batak | Inscriptional_Pahlavi | Osmanya |
Bengali | Inscriptional_Parthian | Phags_Pa |
Bopomofo | Javanese | Phoenician |
Brahmi | Kaithi | Rejang |
Braille | Kannada | Runic |
Buginese | Katakana | Samaritan |
Buhid | Kayah_Li | Saurashtra |
Canadian_Aboriginal | Kharoshthi | Sharada |
Carian | Khmer | Shavian |
Chakma | Lao | Sinhala |
Cham | Latin | Sora_Sompeng |
Cherokee | Lepcha | Sundanese |
Common | Limbu | Syloti_Nagri |
Coptic | Linear_B | Syriac |
Cuneiform | Lisu | Tagalog |
Cypriot | Lycian | Tagbanwa |
Cyrillic | Lydian | Tai_Le |
Deseret | Malayalam | Tai_Tham |
Devanagari | Mandaic | Tai_Viet |
Egyptian_Hieroglyphs | Meetei_Mayek | Takri |
Ethiopic | Meroitic_Cursive | Tamil |
Georgian | Meroitic_Hieroglyphs | Telugu |
Glagolitic | Miao | Thaana |
Gothic | Mongolian | Thai |
Greek | Myanmar | Tibetan |
Gujarati | New_Tai_Lue | Tifinagh |
Gurmukhi | Nko | Ugaritic |
Han | Ogham | Vai |
Hangul | Ol_Chiki | Yi |
Below is the table of names accepted by unicode.hangulSyllableType.
Abb. | Long form |
---|---|
L | Leading_Jamo |
LV | LV_Syllable |
LVT | LVT_Syllable |
T | Trailing_Jamo |
V | Vowel_Jamo |
Constant code point (0x2028) - line separator.
Constant code point (0x2029) - paragraph separator.
Tests if T is some kind of a set of code points. Intended for template constraints.
Tests if T is a pair of integers that implicitly convert to V. The following code must compile for any pair T:

```d
(T x){ V a = x[0]; V b = x[1]; }
```

And the following must not compile:

```d
(T x){ V c = x[2]; }
```
The recommended default type for set of code points. For details, see the current implementation: InversionList .
The recommended type of std.typecons.Tuple to represent [a, b) intervals of code points, as used in InversionList. Any interval type should pass the isIntegralPair trait.
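A quick illustration of both traits (a small sketch, assuming the default V = uint for isIntegralPair):

```d
import std.typecons : Tuple;
import std.uni;

static assert(isCodepointSet!CodepointSet);
static assert(!isCodepointSet!(int[]));
static assert(isIntegralPair!(Tuple!(uint, uint)));
// a 3-tuple has a valid x[2], so it must not pass
static assert(!isIntegralPair!(Tuple!(uint, uint, uint)));
```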
InversionList is a set of code points represented as an array of open-right [a, b) intervals (see CodepointInterval above). The name comes from the way the representation reads left to right. For instance a set of all values [10, 50), [80, 90), plus a singular value 60 looks like this:
10, 50, 60, 61, 80, 90
The way to read this is: start with negative, meaning that all numbers smaller than the next one are not present in this set (and positive, the contrary), then switch positive/negative after each number passed from left to right.
This way negative spans until 10, then positive until 50, then negative until 60, then positive until 61, and so on. As seen, this provides space-efficient storage for highly redundant data that comes in long runs, a description that Unicode character properties fit nicely. The technique itself can be seen as a variation on run-length encoding (RLE).
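Membership testing then boils down to a binary search plus a parity check. A minimal standalone sketch of the idea (not the module's actual code):

```d
import std.range : assumeSorted;

// is x a member of the set encoded by the inversion list inv?
bool contains(const(uint)[] inv, uint x)
{
    // count the boundaries <= x; an odd count means x falls
    // inside one of the [a, b) spans
    auto n = inv.length - assumeSorted(inv).upperBound(x).length;
    return (n & 1) == 1;
}

unittest
{
    auto inv = [10u, 50, 60, 61, 80, 90];
    assert(!contains(inv, 9) && contains(inv, 10));
    assert(contains(inv, 49) && !contains(inv, 50));
    assert(contains(inv, 60) && !contains(inv, 61));
    assert(contains(inv, 85) && !contains(inv, 90));
}
```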
Sets are value types (just like int is); thus they are never aliased.
```d
auto a = CodepointSet('a', 'z'+1);
auto b = CodepointSet('A', 'Z'+1);
auto c = a;
a = a | b;
assert(a == CodepointSet('A', 'Z'+1, 'a', 'z'+1));
assert(a != c);
```
See also unicode for simpler construction of sets from predefined ones.
Memory usage is 6 bytes per contiguous interval in a set. The value semantics are achieved by using the copy-on-write (COW, http://en.wikipedia.org/wiki/Copy-on-write) technique; thus it's not safe to cast this type to shared.
It's not recommended to rely on the template parameters or the exact type of a current code point set in std.uni. The type and parameters may change when the standard allocators design is finalized. Use isCodepointSet with templates or just stick with the default alias CodepointSet throughout the whole code base.
Construct from another code point set of any type.
Construct a set from a range of sorted code point intervals.
Construct a set from plain values of sorted code point intervals.
```d
auto set = CodepointSet('a', 'z'+1, 'а', 'я'+1);
foreach(v; 'a'..'z'+1)
    assert(set[v]);
// Cyrillic lowercase interval
foreach(v; 'а'..'я'+1)
    assert(set[v]);
```
Get a range that spans all of the code point intervals in this InversionList.
```d
import std.algorithm, std.typecons;
auto set = CodepointSet('A', 'D'+1, 'a', 'd'+1);
assert(set.byInterval.equal([tuple('A', 'E'), tuple('a', 'e')]));
```
Tests the presence of code point val in this set.
```d
auto gothic = unicode.Gothic;
// Gothic letter ahsa
assert(gothic['\U00010330']);
// no ASCII in Gothic, obviously
assert(!gothic['$']);
```
Number of code points in this set
Sets support natural syntax for set algebra, namely:
Operator | Math notation | Description |
---|---|---|
& | a ∩ b | intersection |
| | a ∪ b | union |
- | a ∖ b | subtraction |
~ | a △ b | symmetric set difference, i.e. (a ∪ b) \ (a ∩ b) |
```d
import std.algorithm.comparison : equal;
import std.range : iota;

auto lower = unicode.LowerCase;
auto upper = unicode.UpperCase;
auto ascii = unicode.ASCII;

assert((lower & upper).empty); // no intersection
auto lowerASCII = lower & ascii;
assert(lowerASCII.byCodepoint.equal(iota('a', 'z'+1)));
// throw away all of the lowercase ASCII
assert((ascii - lower).length == 128 - 26);

auto onlyOneOf = lower ~ ascii;
assert(!onlyOneOf['Δ']); // not ASCII and not lowercase
assert(onlyOneOf['$']);  // ASCII and not lowercase
assert(!onlyOneOf['a']); // ASCII and lowercase
assert(onlyOneOf['я']);  // not ASCII but lowercase

// throw away all cased letters from ASCII
auto noLetters = ascii - (lower | upper);
assert(noLetters.length == 128 - 26*2);
```
The 'op=' versions of the above overloaded operators.
Tests the presence of code point ch in this set; the same as opIndex.
Obtains a set that is the inversion of this set. See also inverted .
A range that spans each code point in this set.
```d
import std.algorithm.comparison : equal;
import std.range : iota;

auto set = unicode.ASCII;
assert(set.byCodepoint.equal(iota(0, 0x80)));
```
Obtains a textual representation of this set in the form of open-right intervals and feeds it to sink.
Used by various standard formatting facilities such as std.format.formattedWrite, std.stdio.write, std.stdio.writef, std.conv.to and others.
```d
import std.conv;
assert(unicode.ASCII.to!string == "[0..128)");
```
Add an interval [a, b) to this set.
```d
CodepointSet someSet;
someSet.add('0', '5').add('A', 'Z'+1);
someSet.add('5', '9'+1);
assert(someSet['0']);
assert(someSet['5']);
assert(someSet['9']);
assert(someSet['Z']);
```
Obtains a set that is the inversion of this set.
See the '!' opUnary for the same but using operators.
```d
auto set = unicode.ASCII;
// union with the inverse gets all of the code points in Unicode
assert((set | set.inverted).length == 0x110000);
// no intersection with the inverse
assert((set & set.inverted).empty);
```
Generates a string with the D source code of a unary function named funcName, taking a single dchar argument. If funcName is empty, the code is adjusted to be a lambda function.
The generated function tests whether the code point passed belongs to this set. The result is to be used with string mixin. The intended usage area is aggressive optimization via metaprogramming in parser generators and the like.
```d
import std.stdio;
// construct set directly from [a, b) intervals
auto set = CodepointSet(10, 12, 45, 65, 100, 200);
writeln(set);
writeln(set.toSourceCode("func"));
```
```d
bool func(dchar ch)
{
    if(ch < 45)
    {
        if(ch == 10 || ch == 11) return true;
        return false;
    }
    else if(ch < 65) return true;
    else
    {
        if(ch < 100) return false;
        if(ch < 200) return true;
        return false;
    }
}
```
True if this set doesn't contain any code points.
```d
CodepointSet emptySet;
assert(emptySet.length == 0);
assert(emptySet.empty);
```
A shorthand for creating a custom multi-level fixed Trie from a CodepointSet. sizes are numbers of bits per level, with the most significant bits used first.
```d
import std.stdio;
auto set = unicode("Number");
auto trie = codepointSetTrie!(8, 5, 8)(set);
writeln("Input code points to test:");
foreach(line; stdin.byLine)
{
    int count = 0;
    foreach(dchar ch; line)
        if(trie[ch]) // is a number
            count++;
    writefln("Contains %d number code points.", count);
}
```
Type of Trie generated by codepointSetTrie function.
A slightly more general tool for building a fixed Trie for Unicode data.
Specifically, unlike codepointSetTrie, it allows creating mappings of dchar to an arbitrary type T.
```d
import std.algorithm : max, count;

// pick characters from the Greek script
auto set = unicode.Greek;

// a user-defined property (or an expensive function)
// that we want to look up
static uint luckFactor(dchar ch)
{
    // here we consider a character lucky
    // if its code point has a lot of identical hex-digits
    // e.g. arabic letter DDAL (\u0688) has a "luck factor" of 2
    ubyte[6] nibbles; // 6 4-bit chunks of code point
    uint value = ch;
    foreach(i; 0..6)
    {
        nibbles[i] = value & 0xF;
        value >>= 4;
    }
    uint luck;
    foreach(n; nibbles)
        luck = cast(uint)max(luck, count(nibbles[], n));
    return luck;
}

// only unsigned built-ins are supported at the moment
alias LuckFactor = BitPacked!(uint, 3);

// create a temporary associative array (AA)
LuckFactor[dchar] map;
foreach(ch; set.byCodepoint)
    map[ch] = luckFactor(ch);

// bits per stage are chosen randomly, feel free to optimize
auto trie = codepointTrie!(LuckFactor, 8, 5, 8)(map);

// from now on the AA is not needed
foreach(ch; set.byCodepoint)
    assert(trie[ch] == luckFactor(ch)); // verify

// CJK is not Greek, thus it has the default value
assert(trie['\u4444'] == 0);

// and here is a couple of quite lucky Greek characters:
// Greek small letter epsilon with dasia
assert(trie['\u1F11'] == 3);
// Ancient Greek metretes sign
assert(trie['\U00010181'] == 3);
```
Type of Trie as generated by codepointTrie function.
Convenience function to construct optimal configurations for a packed Trie from any set of code points.
The parameter level indicates the number of trie levels to use; allowed values are 1, 2, 3 or 4. Levels represent different speed-size trade-offs. Level 1 is the fastest and the most memory hungry (a bit array). Level 4 is the slowest and has the smallest footprint.
Builds a Trie with typically optimal speed-size trade-off and wraps it into a delegate of the following type: bool delegate(dchar ch).
Effectively this creates a 'tester' lambda suitable for algorithms like std.algorithm.find that take unary predicates.
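For instance (a short sketch in the spirit of the module overview example):

```d
import std.algorithm.searching : find;

auto isCyrillic = unicode.Cyrillic.toDelegate;
// find the first Cyrillic code point in a string
assert("hello мир".find!isCyrillic == "мир");
// usable anywhere a bool delegate(dchar) fits
bool delegate(dchar) pred = isCyrillic;
```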
A single entry point to lookup Unicode code point sets by name or alias of a block, script or general category.
It uses the well-defined standard rules of property name lookup. This includes fuzzy matching of names, so that 'White_Space', 'white-SpAce' and 'whitespace' are all considered equal and yield the same set of white space characters.
Performs the lookup of a set of code points with compile-time correctness checking. This shortcut version combines 3 searches: across blocks, scripts, and common binary properties.
Note that since scripts and blocks overlap, the usual trick to disambiguate is used: to get a block use unicode.InBlockName; to search a script use unicode.ScriptName. See also block, script, and (not included in this search) hangulSyllableType.
```d
auto ascii = unicode.ASCII;
assert(ascii['A']);
assert(ascii['~']);
assert(!ascii['à']);
// matching is case-insensitive
assert(ascii == unicode.ascII);
// underscores, '-' and whitespace in names are ignored too
auto latin = unicode.in_latin1_Supplement;
assert(latin['à']);
assert(!latin['$']);
// BTW Latin-1 Supplement is a block, hence the "In" prefix
assert(latin == unicode("In Latin 1 Supplement"));

import std.exception;
// run-time look-up throws if no such set is found
assert(collectException(unicode("InCyrilliac")));
```
The same lookup across blocks, scripts, or binary properties, but performed at run-time. This version is provided for cases where name is not known beforehand; otherwise compile-time checked opDispatch is typically a better choice.
See the table of properties for available sets.
Narrows down the search for sets of code points to all Unicode blocks.
See also the table of properties.
```d
// use .block for explicitness
assert(unicode.block.Greek_and_Coptic == unicode.InGreek_and_Coptic);
```
Narrows down the search for sets of code points to all Unicode scripts.
See the table of properties for available sets.
```d
auto arabicScript = unicode.script.arabic;
auto arabicBlock = unicode.block.arabic;
// there is an intersection between script and block:
// e.g. Arabic letter alef belongs to both
assert(arabicBlock['\u0627']);
assert(arabicScript['\u0627']);
// but they are different
assert(arabicBlock != arabicScript);
assert(arabicBlock == unicode.inArabic);
assert(arabicScript == unicode.arabic);
```
Fetch a set of code points that have the given hangul syllable type.
Other non-binary properties (once supported) follow the same notation: unicode.propertyName.propertyValue for compile-time checked access and unicode.propertyName(propertyValue) for run-time checked one. See the table of properties for available sets.
```d
// L here is the syllable type, not Letter as in the unicode.L shorthand
auto leadingJamo = unicode.hangulSyllableType("L");
// check that some leading consonant jamo are present
foreach(ch; '\u1110'..'\u115F')
    assert(leadingJamo[ch]);
assert(leadingJamo == unicode.hangulSyllableType.L);
```
Returns the length of the grapheme cluster starting at index. Both the resulting length and the index are measured in code units.
```d
// ASCII as usual is 1 code unit, 1 code point etc.
assert(graphemeStride("  ", 1) == 1);
// A + combining ring above
string city = "A\u030Arhus";
size_t first = graphemeStride(city, 0);
assert(first == 3); // \u030A has 2 UTF-8 code units
assert(city[0..first] == "A\u030A");
assert(city[first..$] == "rhus");
```
Reads one full grapheme cluster from an input range of dchar inp.
For examples see the Grapheme below.
A structure designed to effectively pack characters of a grapheme cluster.
Grapheme has value semantics so 2 copies of a Grapheme always refer to distinct objects. In most actual scenarios a Grapheme fits on the stack and avoids memory allocation overhead for all but quite long clusters.
```d
import std.algorithm;
string bold = "ku\u0308hn";
// note that decodeGrapheme takes its parameter by ref
// slicing a grapheme yields a range of dchar
assert(decodeGrapheme(bold)[].equal("k"));
// the next grapheme is 2 characters long
auto wideOne = decodeGrapheme(bold);
assert(wideOne.length == 2);
assert(wideOne[].equal("u\u0308"));
// the usual range manipulation is possible
assert(wideOne[].filter!isMark.equal("\u0308"));
```
See also decodeGrapheme , graphemeStride .
Gets a code point at the given index in this cluster.
Writes a code point ch at given index in this cluster.
```d
auto g = Grapheme("A\u0302");
assert(g[0] == 'A');
assert(g.valid);
g[1] = '~'; // ASCII tilde is not a combining mark
assert(g[1] == '~');
assert(!g.valid);
```
Random-access range over Grapheme's characters.
Grapheme cluster length in code points.
Append character ch to this grapheme.
```d
import std.algorithm.comparison : equal;

auto g = Grapheme("A");
assert(g.valid);
g ~= '\u0301';
assert(g[].equal("A\u0301"));
assert(g.valid);
g ~= "B";
// not a valid grapheme cluster anymore
assert(!g.valid);
// still could be useful though
assert(g[].equal("A\u0301B"));
```

See also Grapheme.valid below.
Append all characters from the input range inp to this Grapheme.
True if this object contains valid extended grapheme cluster. Decoding primitives of this module always return a valid Grapheme.
Appending to and direct manipulation of a grapheme's characters may render it no longer valid. Certain applications may choose to use Grapheme as a "small string" of any code points and ignore this property entirely.
Does basic case-insensitive comparison of strings str1 and str2. This function uses a simpler comparison rule, thus achieving better performance than icmp. However, keep in mind the warning below.
```d
assert(sicmp("Август", "авгусТ") == 0);
// Greek also works as long as there is no 1:M mapping in sight
assert(sicmp("ΌΎ", "όύ") == 0);
// things like the following won't get matched as equal:
// Greek small letter iota with dialytika and tonos
assert(sicmp("ΐ", "\u03B9\u0308\u0301") != 0);
// while icmp has no problem with that
assert(icmp("ΐ", "\u03B9\u0308\u0301") == 0);
assert(icmp("ΌΎ", "όύ") == 0);
```
Does case-insensitive comparison of str1 and str2. Follows the rules of full case-folding mapping. This includes matching German ß as equal to "ss" and other 1:M code point mappings, unlike sicmp. The cost of icmp being pedantically correct is slightly worse performance.
```d
assert(icmp("Rußland", "Russland") == 0);
assert(icmp("ᾩ -> \u1F70\u03B9", "\u1F61\u03B9 -> ᾲ") == 0);
```
Returns the combining class of ch.
```d
// shorten the code
alias CC = combiningClass;
// combining tilde
assert(CC('\u0303') == 230);
// combining ring below
assert(CC('\u0325') == 220);
// the simple consequence is that a "tilde" should be
// placed after a "ring below" in a sequence
```
Unicode character decomposition type.
Canonical decomposition. The result is a canonically equivalent sequence.
Compatibility decomposition. The result is a compatibility-equivalent sequence.
Try to canonically compose 2 characters. Returns the composed character if they do compose and dchar.init otherwise.
The assumption is that first comes before second in the original text, usually meaning that the first is a starter.
```d
assert(compose('A', '\u0308') == '\u00C4');
assert(compose('A', 'B') == dchar.init);
assert(compose('C', '\u0301') == '\u0106');
// note that the starter is the first one
// thus the following doesn't compose
assert(compose('\u0308', 'A') == dchar.init);
```
Returns a full Canonical (by default) or Compatibility decomposition of character ch. If no decomposition is available returns a Grapheme with the ch itself.
```d
import std.algorithm;
assert(decompose('Ĉ')[].equal("C\u0302"));
assert(decompose('D')[].equal("D"));
assert(decompose('\uD4DC')[].equal("\u1111\u1171\u11B7"));
assert(decompose!Compatibility('¹')[].equal("1"));
```
Decomposes a Hangul syllable. If ch is not a composed syllable then this function returns Grapheme containing only ch as is.
```d
import std.algorithm;
assert(decomposeHangul('\uD4DB')[].equal("\u1111\u1171\u11B6"));
```
Try to compose hangul syllable out of a leading consonant (lead), a vowel and optional trailing consonant jamos.
On success returns the composed LV or LVT hangul syllable. If either lead or vowel is not a valid hangul jamo of the respective character class, returns dchar.init.
```d
assert(composeJamo('\u1111', '\u1171', '\u11B6') == '\uD4DB');
// leaving out the trailing consonant, or passing any code point
// that is not a trailing consonant, composes an LV-syllable
assert(composeJamo('\u1111', '\u1171') == '\uD4CC');
assert(composeJamo('\u1111', '\u1171', ' ') == '\uD4CC');
assert(composeJamo('\u1111', 'A') == dchar.init);
assert(composeJamo('A', '\u1171') == dchar.init);
```
Enumeration type for normalization forms, passed as template parameter for functions like normalize .
Shorthand aliases for values indicating normalization forms.
Returns input string normalized to the chosen form. Form C is used by default.
For more information on normalization forms see the normalization section.
```d
// any encoding works
wstring greet = "Hello world";
assert(normalize(greet) is greet); // the exact same slice

// An example of a character with all 4 forms being different:
// Greek upsilon with acute and hook symbol (code point 0x03D3)
assert(normalize!NFC("ϓ") == "\u03D3");
assert(normalize!NFD("ϓ") == "\u03D2\u0301");
assert(normalize!NFKC("ϓ") == "\u038E");
assert(normalize!NFKD("ϓ") == "\u03A5\u0301");
```
Tests if dchar ch is always allowed (Quick_Check=YES) in normalization form norm.
```d
// e.g. Cyrillic is always allowed, so is ASCII
assert(allowedIn!NFC('я'));
assert(allowedIn!NFD('я'));
assert(allowedIn!NFKC('я'));
assert(allowedIn!NFKD('я'));
assert(allowedIn!NFC('Z'));
```
Whether or not c is a Unicode whitespace character (general Unicode categories: part of C0 (tab, vertical tab, form feed, carriage return, and line feed), Zs, Zl, Zp, and NEL (U+0085)).
Return whether c is a Unicode lowercase character.
Return whether c is a Unicode uppercase character.
Converts s to lowercase (by performing Unicode lowercase mapping) in place. For a few characters string length may increase after the transformation, in such a case the function reallocates exactly once. If s does not have any uppercase characters, then s is unaltered.
Converts s to uppercase (by performing Unicode uppercase mapping) in place. For a few characters string length may increase after the transformation, in such a case the function reallocates exactly once. If s does not have any lowercase characters, then s is unaltered.
Returns a string which is identical to s except that all of its characters are converted to lowercase (by performing Unicode lowercase mapping). If none of s characters were affected, then s itself is returned.
Returns a string which is identical to s except that all of its characters are converted to uppercase (by performing Unicode uppercase mapping). If none of s characters were affected, then s itself is returned.
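A short illustration of the case mapping functions above (the ß expansion shows why the string may grow):

```d
assert(toLower("КИРиллица") == "кириллица");
// ß uppercases to "SS", so the result is longer than the input
assert(toUpper("grüße") == "GRÜSSE");
string s = "already lowercase";
assert(toLower(s) is s); // unchanged input is returned as-is
```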
Returns whether c is a Unicode alphabetic character (general Unicode category: Alphabetic).
Returns whether c is a Unicode mark (general Unicode category: Mn, Me, Mc).
Returns whether c is a Unicode numerical character (general Unicode category: Nd, Nl, No).
Returns whether c is a Unicode punctuation character (general Unicode category: Pd, Ps, Pe, Pc, Po, Pi, Pf).
Returns whether c is a Unicode symbol character (general Unicode category: Sm, Sc, Sk, So).
Returns whether c is a Unicode graphical character (general Unicode category: L, M, N, P, S, Zs).
Returns whether c is a Unicode control character (general Unicode category: Cc).
Returns whether c is a Unicode formatting character (general Unicode category: Cf).
Returns whether c is a Unicode Private Use code point (general Unicode category: Co).
Returns whether c is a Unicode surrogate code point (general Unicode category: Cs).
Returns whether c is a Unicode high surrogate (lead surrogate).
Returns whether c is a Unicode low surrogate (trail surrogate).
Returns whether c is a Unicode non-character i.e. a code point with no assigned abstract character. (general Unicode category: Cn)
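Taken together, the classification functions above can be exercised like this (a brief illustrative snippet; the expected categories follow the Unicode character database):

```d
assert(isAlpha('Ω'));          // Greek capital omega, category Lu
assert(isMark('\u0301'));      // combining acute accent, Mn
assert(isNumber('¾'));         // vulgar fraction three quarters, No
assert(isPunctuation('¿'));    // inverted question mark, Po
assert(isSymbol('€'));         // euro sign, Sc
assert(isWhite('\u2028'));     // line separator, Zl
assert(isControl('\t'));       // tab, Cc
assert(isFormat('\u200B'));    // zero width space, Cf
// surrogates cannot appear in D literals, hence the casts
assert(isSurrogateHi(cast(dchar)0xD800));
assert(isSurrogateLo(cast(dchar)0xDC00));
assert(!isNonCharacter('A'));
```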