Interest in High-Logic's Support for going Beyond Unicode
Posted: Thu May 12, 2016 8:00 am
I have an interest in cataloguing the full range of what I call Litmarks: the characters used in both print and handwriting throughout history. What do you know that could help me do this?
• http://www.unicode.org/charts/ is actually the basis for my inquiry. I am looking for a database that can keep track of the various languages, each language's endonym (how the language is referred to by those who speak it, as well as the native names for each of its Litmarks), and have the various glyphs all differentiated properly, so there would be several Z's: not just the regular one that sorts after Y, but also one that sorts after G the way Latin did originally, one between S and T for the Estonian letter, and so on.
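The several-Z's idea above amounts to per-language collation: the same printed shape can occupy different alphabet positions depending on the language. Here is a minimal sketch in Python, assuming hypothetical, heavily abbreviated alphabet tables (a real system would use full Unicode collation data):

```python
# Sketch: one letter shape, several sortable identities.
# The alphabet lists below are hypothetical and abbreviated for illustration.

ALPHABETS = {
    # language: ordered list of letters, defining that language's sort order
    "english":       ["a", "b", "c", "d", "e", "f", "g", "s", "t", "y", "z"],
    "archaic_latin": ["a", "b", "c", "d", "e", "f", "z", "g", "s", "t", "y"],
    "estonian":      ["a", "b", "c", "d", "e", "f", "g", "s", "z", "t", "y"],
}

def sort_key(word, language):
    """Collation key: the position of each letter in that language's alphabet."""
    order = {letter: i for i, letter in enumerate(ALPHABETS[language])}
    return [order[ch] for ch in word]

words = ["za", "ta", "sa"]
print(sorted(words, key=lambda w: sort_key(w, "english")))   # z sorts after t
print(sorted(words, key=lambda w: sort_key(w, "estonian")))  # z sorts between s and t
```

The point of the sketch is that a "Z" entry in the database would carry its own sort position per language, rather than a single universal one.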
In older English, & was treated as a letter with an uppercase form and a lower-case variant, and that lower-case variant is still current in most people's handwriting.
Also, there needs to be a Lithuanian lowercase i that keeps its dot automatically when diacritics are added; a Turkish lowercase i that pairs automatically with İ; and a Turkish I that pairs automatically with ı. There also need to be two different ŋ's, because the Sámi capital looks different from the African one.
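The Turkish pairing above is essentially language-dependent case mapping. A minimal sketch, assuming a small hand-built exception table (real systems draw this from Unicode's SpecialCasing.txt data):

```python
# Sketch: language-aware case pairing. The exception table below is a
# hand-built illustration, not complete Unicode casing data.

CASE_EXCEPTIONS = {
    # (language, character) -> case counterpart
    ("tr", "i"): "İ",   # Turkish dotted lowercase i uppercases to dotted İ
    ("tr", "I"): "ı",   # Turkish dotless uppercase I lowercases to dotless ı
}

def to_upper(ch, lang):
    """Uppercase one character, honoring language-specific pairings."""
    return CASE_EXCEPTIONS.get((lang, ch), ch.upper())

def to_lower(ch, lang):
    """Lowercase one character, honoring language-specific pairings."""
    return CASE_EXCEPTIONS.get((lang, ch), ch.lower())

print(to_upper("i", "tr"))  # İ  (Turkish pairing)
print(to_upper("i", "en"))  # I  (default pairing)
print(to_lower("I", "tr"))  # ı  (Turkish pairing)
```

The same table-driven approach could carry the Lithuanian rule about retaining the dot under diacritics, keyed by language in exactly the same way.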
Again, when it comes to the Han scripts, there needs to be disunification, because many characters look different even between Mandarin, Japanese, and Cantonese, and the various other languages need to be equally represented.
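One way to model that disunification is to key each glyph by code point *and* language, which is roughly how the OpenType `locl` feature selects locale-specific forms. A sketch with hypothetical glyph names:

```python
# Sketch: disunified glyph lookup keyed by (code point, language tag).
# The glyph names below are hypothetical placeholders.

GLYPHS = {
    (0x76F4, "zh-Hans"): "u76F4.zhs",  # 直 as drawn for Simplified Chinese
    (0x76F4, "zh-Hant"): "u76F4.zht",  # 直 as drawn for Traditional Chinese
    (0x76F4, "ja"):      "u76F4.jp",   # 直 as drawn for Japanese
}

def glyph_for(codepoint, lang, default=None):
    """Pick the language-specific glyph, falling back to a default form."""
    return GLYPHS.get((codepoint, lang), default)

print(glyph_for(0x76F4, "ja"))            # u76F4.jp
print(glyph_for(0x76F4, "ko", "u76F4"))   # no Korean entry: falls back
```

A database built this way stores one row per regional form instead of one row per unified code point, which is exactly the disunification being asked for.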
Then there are the various kerning and design errors, such as the two-letter combination "rn" looking like the single letter "m" in many fonts, and minus signs and the like sitting at the wrong height. I like the idea behind Google's Noto series, though I note some design flaws there as well. I need to be able to swap in characters from other fonts to take the place of ones I dislike. Ideally I'd like to reduce my font collection to roughly twice the size of the Noto set, all of them comprehensive fonts, getting rid entirely of the fonts I don't like and having their characters replaced with ones I do like.
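The character-swapping workflow described above boils down to a per-character fallback chain: for each character, take it from the most-preferred font that covers it. A self-contained sketch, with hypothetical font names and coverage sets (real coverage would be read from each font's cmap table):

```python
# Sketch: assigning each character to a source font via a preference chain.
# Font names and coverage sets are hypothetical illustrations.

COVERAGE = {
    "PreferredSans": set("abcdefgh"),                    # partial coverage
    "NotoFallback":  set("abcdefghijklmnopqrstuvwxyz"),  # broad coverage
}

PREFERENCE = ["PreferredSans", "NotoFallback"]

def assign_sources(text):
    """Map each distinct character to the first font in the chain that covers it."""
    plan = {}
    for ch in set(text):
        for font in PREFERENCE:
            if ch in COVERAGE[font]:
                plan[ch] = font
                break
        else:
            plan[ch] = None  # nothing covers it: a gap in the collection
    return plan

plan = assign_sources("bad zoo")
print(plan)
```

In practice the actual glyph copying would be done with a font editor or a tool such as fontTools, but a plan like this is what decides which font supplies which character.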