[ruby-core:102314] [Ruby master Bug#17594] Sort order of UTF-16LE is based on binary representation instead of codepoints
From: daniel@...42.com
Date: 2021-01-30 02:14:11 UTC
List: ruby-core #102314
Issue #17594 has been reported by Dan0042 (Daniel DeLorme).
----------------------------------------
Bug #17594: Sort order of UTF-16LE is based on binary representation instead of codepoints
https://bugs.ruby-lang.org/issues/17594
* Author: Dan0042 (Daniel DeLorme)
* Status: Open
* Priority: Normal
* Backport: 2.5: UNKNOWN, 2.6: UNKNOWN, 2.7: UNKNOWN, 3.0: UNKNOWN
----------------------------------------
I just discovered that string comparison (and therefore sorting) is always based on bytes, so UTF-16LE strings sort in some peculiar ways:
```ruby
BE, LE = 'UTF-16BE', 'UTF-16LE'
# build a UTF-8 string of every lowercase letter (\p{Ll}) in U+0000..U+04FF
str = [*0..0x4ff].pack('U*').scan(/\p{Ll}/).join
puts str.encode(BE).chars.sort.first(50).join.encode('UTF-8')
#abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõ
puts str.encode(LE).chars.sort.first(50).join.encode('UTF-8')
#āȁăȃąȅćȇĉȉċȋčȍďȏđȑēȓĕȕėȗęșěțĝȝğȟġȡģȣĥȥħȧĩȩīȫĭȭįȯаı
'a'.encode(BE) < 'ā'.encode(BE) #=> true
'a'.encode(LE) < 'ā'.encode(LE) #=> false
```
Is this supposed to be correct? I mean, I somewhat understand the idea of just sorting by bytes, but I find the above output to be remarkably nonsensical.
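To spell out where the inversion comes from, here is a quick look at the raw bytes (nothing hypothetical, just the standard UTF-16 byte layouts):
```ruby
# 'a' is U+0061 and 'ā' is U+0101
'a'.encode('UTF-16BE').bytes  #=> [0, 97]  (bytes 0x00 0x61)
'ā'.encode('UTF-16BE').bytes  #=> [1, 1]   (bytes 0x01 0x01)
# big-endian bytes sort like codepoints: 0x00 < 0x01, so 'a' < 'ā'

'a'.encode('UTF-16LE').bytes  #=> [97, 0]  (bytes 0x61 0x00)
'ā'.encode('UTF-16LE').bytes  #=> [1, 1]   (bytes 0x01 0x01)
# little-endian puts the low byte first: 0x61 > 0x01, so 'a' > 'ā'
```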
A similar/related issue was found and fixed in #8653, so there's precedent for considering codepoints instead of bytes.
I'm asking because I was working on some optimizations for `String#casecmp` (https://github.com/ruby/ruby/pull/4133) which, as a side effect, compare by codepoint for UTF-16LE. That made `<=>` and `casecmp` produce different orders, which broke some tests. But I think sorting by codepoint would be better in this case.
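For comparison, here is a minimal sketch of what codepoint-order comparison looks like in pure Ruby (`codepoint_cmp` is a hypothetical helper for illustration, not the code in the PR):
```ruby
# Hypothetical helper: compare strings by codepoints instead of raw bytes.
def codepoint_cmp(a, b)
  a.codepoints <=> b.codepoints
end

a    = 'a'.encode('UTF-16LE')
abar = 'ā'.encode('UTF-16LE')
a <=> abar             #=> 1   (byte order: 'a' sorts after 'ā')
codepoint_cmp(a, abar) #=> -1  (codepoint order: 'a' sorts first)
```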
--
https://bugs.ruby-lang.org/