How to Filter Sensitive Words in a Forum

For a Java servlet application, forum sensitive-word filtering can be implemented with a custom Filter:
the idea is simply to replace sensitive words with other characters. The implementation is described below; if you know a better approach, feel free to point it out.

First, extend the HttpServletRequestWrapper class and override its
public String getParameter(String) and public String[] getParameterValues(String) methods,
because these are the methods through which the back-end code reads the data submitted from the front end.
You can also add a replace method of your own that performs the actual word substitution.
The method looks like this:



public String replace(String str) {
    StringBuffer sb = new StringBuffer(str);
    Set<String> keys = this.getMap().keySet();
    Iterator<String> it = keys.iterator();
    String lastReplaced = null;                     // last sensitive word that was replaced
    while (it.hasNext()) {
        String key = it.next();
        int index = sb.indexOf(key);
        if (index != -1) {
            lastReplaced = key;
            // replace the first occurrence of this sensitive word
            sb.replace(index, index + key.length(), this.getMap().get(key));
        }
    }
    System.out.println("filtered content=" + sb.toString());
    // if the last replaced word still appears (it may occur more than once),
    // run the filter again on the result; this assumes the replacement text
    // does not itself contain the sensitive word, otherwise it would recurse forever
    if (lastReplaced != null && sb.toString().indexOf(lastReplaced) != -1) {
        return replace(sb.toString());
    }
    return sb.toString();
}
The Map in the code holds the sensitive words and their replacements.
The word list can be kept in a text file, with one entry per line in the form sensitiveWord=replacement (that is the format the filter's init() method parses below).
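A hypothetical word.txt could look like the following (the words and replacements are placeholders of my own; the filter below simply splits each line on "="):

idiot=****
moron=**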


Then, in your own Filter, wrap the request object with this wrapper class; once that is done, sensitive-word filtering is in effect.
The concrete code is given below.

The web.xml configuration:

    <filter>
        <filter-name>WordsFilter</filter-name>
        <filter-class>com.fyz.****.WordsFilter</filter-class>
        <init-param>
            <param-name>filePath</param-name>
            <param-value>/WEB-INF/word.txt</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>WordsFilter</filter-name>
        <url-pattern>/checkLogin_Note</url-pattern>
    </filter-mapping>



The wrapped request:
package com.fyz.***.wrapper;

import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class HttpReqWrapper extends HttpServletRequestWrapper {

    private Map<String, String> map = null;         // sensitive word -> replacement

    public HttpReqWrapper(HttpServletRequest request) {
        super(request);
    }

    // Filter sensitive words out of the given string
    public String replace(String str) {
        StringBuffer sb = new StringBuffer(str);
        Set<String> keys = this.getMap().keySet();
        Iterator<String> it = keys.iterator();
        String lastReplaced = null;                 // last sensitive word that was replaced
        while (it.hasNext()) {
            String key = it.next();
            int index = sb.indexOf(key);
            if (index != -1) {
                lastReplaced = key;
                // replace the first occurrence of this sensitive word
                sb.replace(index, index + key.length(), this.getMap().get(key));
            }
        }
        System.out.println("filtered content=" + sb.toString());
        // if the last replaced word still appears (it may occur more than once),
        // run the filter again on the result
        if (lastReplaced != null && sb.toString().indexOf(lastReplaced) != -1) {
            return replace(sb.toString());
        }
        return sb.toString();
    }

    // Override getParameter() so every parameter the back end reads is filtered
    @Override
    public String getParameter(String name) {
        String content = super.getParameter(name);
        System.out.println("content before filtering=" + content);
        if (content == null) {
            return null;                            // parameter not present
        }
        return replace(content);
    }

    public Map<String, String> getMap() {
        return map;
    }

    public void setMap(Map<String, String> map) {
        this.map = map;
    }
}
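The text above also mentions getParameterValues, which the original class does not override. A minimal sketch of how it could be handled the same way (my addition, not part of the original code):

    // Possible override of getParameterValues(), filtering each value the same
    // way getParameter() does (shown as a sketch, not the original code)
    @Override
    public String[] getParameterValues(String name) {
        String[] values = super.getParameterValues(name);
        if (values == null) {
            return null;
        }
        String[] filtered = new String[values.length];
        for (int i = 0; i < values.length; i++) {
            filtered[i] = values[i] == null ? null : replace(values[i]);
        }
        return filtered;
    }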


The custom Filter itself:



package com.fyz.***.filter;

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import com.fyz.***.wrapper.HttpReqWrapper;

public class WordsFilter implements Filter {

    private Map<String, String> map = new HashMap<String, String>();

    // Filter initialization: load the sensitive-word list from the configured file
    public void init(FilterConfig config) throws ServletException {
        String filePath = config.getInitParameter("filePath");   // relative path from web.xml
        ServletContext context = config.getServletContext();
        String realPath = context.getRealPath(filePath);         // resolve it to an absolute path
        BufferedReader br = null;
        try {
            FileReader freader = new FileReader(realPath);        // read the word file
            br = new BufferedReader(freader);
            String line = null;
            while ((line = br.readLine()) != null) {
                // each line has the form sensitiveWord=replacement
                String[] str = line.split("=");
                if (str.length >= 2) {
                    map.put(str[0], str[1]);
                }
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (br != null) {
                try {
                    br.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        // encoding handling
        request.setCharacterEncoding("gb2312");
        response.setContentType("text/html;charset=gb2312");
        HttpServletRequest httpReq = (HttpServletRequest) request;
        HttpReqWrapper hrw = new HttpReqWrapper(httpReq);
        hrw.setMap(map);                  // hand the word map to the wrapper
        chain.doFilter(hrw, response);    // pass the wrapped request down the chain
    }

    public void destroy() {
        System.out.println("--filter destroyed--");
    }
}
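With the filter mapped to /checkLogin_Note, any parameter read by that servlet already comes back filtered. A minimal illustration (the package, servlet, and parameter names here are hypothetical, not from the original post):

package com.fyz.example;

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CheckLoginNoteServlet extends HttpServlet {
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Because WordsFilter wrapped the request, getParameter() already
        // returns the text with sensitive words replaced.
        String note = request.getParameter("note");
        System.out.println("filtered note=" + note);
    }
}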



The character-encoding handling in this article is not covered in much detail; a more complete encoding fix can be done in the same overridden request methods,
public String getParameter(String) and public String[] getParameterValues(String).
If you also want to add compressed-stream handling (my notes on compression streams, and the problems I ran into, are in my earlier posts),
please see those earlier blog entries.
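As a rough sketch of what that encoding fix might look like inside the wrapper (this assumes the container decoded the parameter bytes as ISO-8859-1, which is only true in some setups; it is my illustration, not the original code):

    // Sketch: re-decode a parameter as gb2312 before filtering
    @Override
    public String getParameter(String name) {
        String content = super.getParameter(name);
        if (content == null) {
            return null;
        }
        try {
            // only needed when the container decoded the bytes as ISO-8859-1
            content = new String(content.getBytes("ISO-8859-1"), "gb2312");
        } catch (java.io.UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        return replace(content);
    }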




Reposted from fengyanzhang.iteye.com/blog/1859894