A LaTeX example (with various packages)

Copyright notice: knowledge sharing is encouraged; this post may be reposted and reused. https://blog.csdn.net/Mr_Cat123/article/details/82632612
% !Mode::"TeX:UTF-8"
\documentclass{article}
\usepackage[UTF8]{ctex}
\usepackage{listings}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage[table]{xcolor}
\usepackage{fancyhdr}
\usepackage{lastpage}
\usepackage{pythonhighlight}
\pagestyle{fancy}


\cfoot{Page \thepage of \pageref{LastPage}}
\usepackage[top=2.54cm, bottom=2.54cm, left=3.18cm,right=3.18cm]{geometry}
\lstset{language=Python}
\lstset{breaklines=true}
\lstset{extendedchars=false}

\begin{document}
%\begin{CJK*}{UTF8}{gbsn}
\title{AI第一次作业}

\date{July 2018}

\maketitle
{\tableofcontents}
\section{问题一}
证明Information gain $\ge 0$ (用香农熵):
解:信息增益表示度量X对预测Y的能力,表达式如下:
$$Gain(X,Y)=H(Y)-H(Y|X)$$
条件信息熵为:
$$H(Y|X)=\sum_{i}p(x_i)H(Y|X=x_i);p(X=x_i)=p(x_i)$$
简单写成:
$$H(Y|X)=\sum_{x\in X}p(x)H(Y|x)$$
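% A brief expansion of the conditional entropy over the joint distribution; this is the
% form that connects $H(Y|X)$ to the joint entropy $H(X,Y)$ used in the identities below.
$$H(Y|X)=-\sum_{x\in X}\sum_{y\in Y}p(x,y)\log_2 p(y|x)$$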
已知Shannon Entropy:
$$H(X)=\sum_{i=1}^{m}p_i\log_2\frac{1}{p_i};p_i=p(X=x_i)$$
简单写成:
$$H(X)=\sum_{x\in X}p(x)\log_2\frac{1}{p(x)}=-\sum_{x\in X}p(x)\log_2p(x)$$
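% A quick sanity check of the definition: a uniform binary variable carries one bit of
% entropy, while a deterministic one carries none.
$$H(X)=\tfrac{1}{2}\log_2 2+\tfrac{1}{2}\log_2 2=1\text{ bit},\qquad H(X)=1\cdot\log_2 1=0\ (\text{deterministic }X)$$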
所以信息增益为:

\begin{equation}\label{information gain}
\begin{aligned}
  Gain(X,Y)&=H(Y)-H(Y|X)\\
  &=H(X)-H(X|Y)\\
  &=H(X)+H(Y)-H(X,Y)
\end{aligned}
\end{equation}
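% One way to finish the $\ge 0$ claim (a sketch, with the convention $0\log_2 0=0$):
% rewrite the gain over the joint distribution and apply Jensen's inequality to the
% concave $\log_2$; equality holds iff $X$ and $Y$ are independent.
$$Gain(X,Y)=\sum_{x,y}p(x,y)\log_2\frac{p(x,y)}{p(x)p(y)}
\ge-\log_2\sum_{x,y}p(x,y)\,\frac{p(x)p(y)}{p(x,y)}=-\log_2\sum_{x,y}p(x)p(y)\ge 0$$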

\section{Motivation}
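% The preamble loads listings and pythonhighlight and sets language=Python, but the body
% never shows a listing; a minimal sketch of one (lstlisting comes from listings, which is
% already loaded; pythonhighlight also provides a python environment for the same purpose):
\begin{lstlisting}
# entropy of a discrete distribution, matching the formula in Section 1
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

print(entropy([0.5, 0.5]))  # 1.0 bit
\end{lstlisting}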
%\end{CJK*}
\end{document} 

The above is the complete source. Compiled (the table of contents and the \pageref{LastPage} footer need two runs), it renders the title "AI第一次作业", the table of contents, Section 问题一 with the information-gain derivation, and the Motivation section.

