Error description: when browsing an HDFS data file through the web UI on port 50075, Chinese characters are garbled whenever the file content contains an & character.
Root cause: the system default charset (Charset.defaultCharset().name()) is UTF-8, but the charset picked up inside the web application is US-ASCII. String encoding therefore falls back to US-ASCII, which covers only the single-byte subset of UTF-8, so multi-byte Chinese characters cannot be reassembled from the encoded bytes.
Reference: the java.lang.String members involved (both use the default charset when none is given):
public String(byte bytes[], int offset, int length)
public byte[] getBytes()
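The charset mismatch described above can be reproduced in isolation. The sketch below (a standalone demo, not code from HtmlQuoting) round-trips a string containing Chinese characters: with UTF-8 the bytes decode back to the original, while US-ASCII replaces each unmappable character with a single '?' byte, which is exactly the unrecoverable loss seen in the web UI.

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String s = "中文&data";

        // Round trip through UTF-8: multi-byte characters survive.
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        String fromUtf8 = new String(utf8, 0, utf8.length, StandardCharsets.UTF_8);
        System.out.println(s.equals(fromUtf8));   // true

        // Round trip through US-ASCII: each Chinese character is encoded
        // as a single '?' replacement byte, so the text cannot be recovered.
        byte[] ascii = s.getBytes(StandardCharsets.US_ASCII);
        String fromAscii = new String(ascii, 0, ascii.length, StandardCharsets.UTF_8);
        System.out.println(fromAscii);            // ??&data
    }
}
```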
URL: http://datanode:50075/browseBlock.jsp?blockId=1073779813&blockSize=15&genstamp=1099511816876&filename=%2Ftmp%2Fwankun%2Faccountinput%2Fd&datanodePort=50010&namenodeInfoPort=50070&nnaddr=192.168.39.123:8020
Fix:
Jar: hadoop-common-2.3.0-cdh5.0.1.jar
Class: org.apache.hadoop.http.HtmlQuoting
Modified code:
public static String quoteHtmlChars(String item) {
  if (item == null) {
    return null;
  }
  byte[] bytes = item.getBytes(Charsets.UTF_8); // changed: encode with explicit UTF-8
  if (needsQuoting(bytes, 0, bytes.length)) {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    try {
      quoteHtmlChars(buffer, bytes, 0, bytes.length);
      return buffer.toString("UTF-8");          // changed: decode with the same charset
    } catch (IOException ioe) {
      // Won't happen, since it is a ByteArrayOutputStream
    }
    return item;
  } else {
    return item;
  }
}
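To see why the explicit UTF-8 charset fixes the garbling, the sketch below reimplements the method as a self-contained class (QuotingSketch, the escape table, and the helper quote() are stand-ins for the real HtmlQuoting internals such as needsQuoting and the OutputStream overload, which are assumptions here). HTML metacharacters like & are escaped byte by byte, while UTF-8 multi-byte sequences pass through untouched and decode back correctly.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class QuotingSketch {
    // Simplified stand-in for the byte-level quoting loop: escape the five
    // HTML metacharacters, copy every other byte verbatim. Bytes belonging
    // to UTF-8 multi-byte sequences are all >= 0x80 and are copied as-is.
    static void quote(OutputStream out, byte[] buf, int off, int len) throws IOException {
        for (int i = off; i < off + len; i++) {
            switch (buf[i]) {
                case '&':  out.write("&amp;".getBytes(StandardCharsets.US_ASCII));  break;
                case '<':  out.write("&lt;".getBytes(StandardCharsets.US_ASCII));   break;
                case '>':  out.write("&gt;".getBytes(StandardCharsets.US_ASCII));   break;
                case '"':  out.write("&quot;".getBytes(StandardCharsets.US_ASCII)); break;
                case '\'': out.write("&apos;".getBytes(StandardCharsets.US_ASCII)); break;
                default:   out.write(buf[i]);
            }
        }
    }

    public static String quoteHtmlChars(String item) throws IOException {
        if (item == null) {
            return null;
        }
        byte[] bytes = item.getBytes(StandardCharsets.UTF_8); // encode with explicit UTF-8
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        quote(buffer, bytes, 0, bytes.length);
        return buffer.toString("UTF-8");                      // decode with the same charset
    }

    public static void main(String[] args) throws IOException {
        System.out.println(quoteHtmlChars("中文&file")); // 中文&amp;file
    }
}
```

With US-ASCII encoding the Chinese characters would already have been destroyed before the quoting loop ever ran; encoding and decoding with the same UTF-8 charset keeps the escape logic intact and the multi-byte text lossless.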
Notes:
1. UTF-8 = US-ASCII (single-byte range) + multi-byte sequences
2. When decoding the string back from bytes, UTF-8 is used
3. In a standalone test the system defaults to UTF-8 for encoding, yet inside HtmlQuoting the encoding ends up being US-ASCII