OpenStack Rocky (R release) installation notes

all-in-one
OS version: VERSION="18.04.2 LTS (Bionic Beaver)"
OpenStack version: Rocky (R)
Reference: https://docs.openstack.org/install-guide/openstack-services.html

------------------------------------------------------------------------------------
Basic environment preparation
1. Security
Password convention: component name + 123, e.g. nova123
2. hosts file
Keep /etc/hosts identical on every node; add an entry of the form: <ip> controller
3. NTP time synchronization
chrony can serve as the NTP daemon; check sources with chronyc sources, or sync manually with ntpdate ntp1.aliyun.com
Common NTP servers:
cn.ntp.org.cn (fast NTP time service domain for China)
ntp1.aliyun.com
Command: apt -y install chrony
In /etc/chrony/chrony.conf, comment out the pool lines and add: server cn.ntp.org.cn iburst
# systemctl start chrony && systemctl enable chrony
4. Package sources
Domestic mirrors such as 163 make package downloads much faster.
Commands:
# apt install software-properties-common
# add-apt-repository cloud-archive:rocky ## currently only usable on Ubuntu 18.04
# apt update && apt -y dist-upgrade ## reboot if the kernel was upgraded
# apt install -y python-openstackclient ## the openstack command comes from python-openstackclient
5. Database
MariaDB serves as the relational database.
Commands:
# apt -y install mariadb-server python-pymysql
Create and edit /etc/mysql/mariadb.conf.d/99-openstack.cnf:
[mysqld]
bind-address = 192.168.137.134 ## controller node IP address

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# systemctl start mariadb && systemctl enable mariadb
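A quick sanity check; the upstream guide also suggests hardening the install. These are standard MariaDB tools, not specific to this setup:
# mysql_secure_installation ## set a root password, drop anonymous users and the test DB
# mysql -e "show databases;" ## the server should answer over the local socket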
6. Message queue
RabbitMQ is a message queue used mainly to decouple applications and make calls asynchronous; it also buffers and distributes messages. RabbitMQ speaks AMQP, a binary protocol, and listens on port 5672 by default.
# apt -y install rabbitmq-server
# rabbitmqctl add_user openstack rabbit123 ## change the password later with: rabbitmqctl change_password openstack rabbit123
# rabbitmqctl set_permissions openstack ".*" ".*" ".*" ## set_permissions [-p <vhost>] <user> <conf> <write> <read>; lets the openstack user configure, write and read
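Optionally confirm the user and permissions took effect (standard rabbitmqctl subcommands):
# rabbitmqctl list_users ## openstack should be listed
# rabbitmqctl list_permissions ## openstack should show ".*" ".*" ".*"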

7.memcached
Memcached is a high-performance distributed in-memory object cache meant to take load off the database behind dynamic web applications: by caching data and objects in RAM it reduces database reads and speeds up dynamic, database-driven sites.
Commands:
# apt -y install memcached python-memcache
In /etc/memcached.conf, change -l 127.0.0.1 to -l 192.168.137.134 (the controller's management IP).
# systemctl start memcached && systemctl enable memcached
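Optional check that memcached rebound to the management address (ss is part of iproute2 on Ubuntu 18.04):
# ss -lntp | grep 11211 ## should show memcached listening on 192.168.137.134:11211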
8.etcd
etcd"这个名字源于两个想法,即 unix "/etc" 文件夹和分布式系统"d"istibuted。 "/etc" 文件夹为单个系统存储配置数据的地方,而 etcd 存储大规模分布式系统的配置信息。因此,"d"istibuted 的 "/etc" ,是为 "etcd"。etcd 以一致和容错的方式存储元数据。分布式系统使用 etcd 作为一致性键值存储,用于配置管理,服务发现和协调分布式工作。使用 etcd 的通用分布式模式包括领导选举,分布式锁和监控机器活动。
命令:
# apt -y install etcd
修改配置文件/etc/default/etcd
ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://192.168.137.134:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.137.134:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.137.134:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.137.134:2379"
# systemctl start etcd && systemctl enable etcd
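A minimal read/write check, assuming the etcdctl binary shipped with the etcd package (ETCDCTL_API=3 selects the v3 API):
# export ETCDCTL_API=3
# etcdctl --endpoints=http://192.168.137.134:2379 put testkey testvalue
# etcdctl --endpoints=http://192.168.137.134:2379 get testkey ## should print testkey / testvalue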

--------------------------------------------------------------------------------
OpenStack component installation and configuration

keystone
1. Keystone (OpenStack Identity Service) is the OpenStack component responsible for authentication, service policy, and service tokens; it implements the OpenStack Identity API. Keystone works like a service bus, or the registry of the whole OpenStack deployment: other services register their endpoints (the URLs at which they are accessed) with Keystone, and any call between services goes through Keystone authentication to obtain the target service's endpoint. Core concepts: User, Credentials, Authentication, Token, Tenant, Service, Endpoint, Role.
https://www.cnblogs.com/yuki-lau/archive/2013/01/04/2843918.html
2. Create the component's database
# mysql
> create database keystone; ## create the keystone database
> grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'keystone123'; ## grant the keystone user privileges on the keystone DB
> grant all privileges on keystone.* to 'keystone'@'%' identified by 'keystone123';
> flush privileges;
> delete from mysql.user where host='%' and user='keystone'; ## delete the corresponding rows from the user table
> select * from mysql.user where user='keystone'; ## inspect
3. Install and configure
# apt -y install keystone apache2 libapache2-mod-wsgi
Edit /etc/keystone/keystone.conf:
[database]
connection = mysql+pymysql://keystone:keystone123@controller/keystone
[token]
provider = fernet
# su -s /bin/sh -c "keystone-manage db_sync" keystone ##填充标识服务器数据库,keystone数据有44个表,没有回显
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone ###初始化Fernet密钥存储库
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# keystone-manage bootstrap --bootstrap-password admin123 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
Edit /etc/apache2/apache2.conf:
ServerName controller ## this line is absent by default
# systemctl start apache2 && systemctl enable apache2 ## start and enable at boot; do the same for every service
Export the admin environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Create a domain, projects, a user, and a role:
# openstack domain create --description "An Example Domain" example ## produces output; creates a domain
# openstack project create --domain default --description "Service Project" service ## produces output; creates the service project
# openstack project create --domain default --description "Demo Project" myproject ## produces output; creates a second project
# openstack user create --domain default --password-prompt myuser ## prompts for a password; set myuser123
# openstack role create myrole ## produces output; creates a role
# openstack role add --project myproject --user myuser myrole ## no output; assigns the role to the user on the project
Verification:
# unset OS_AUTH_URL OS_PASSWORD ## drop the auth URL and password from the environment
# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name admin --os-username admin token issue ## request a token as admin
# openstack --os-auth-url http://controller:5000/v3 \
> --os-project-domain-name Default --os-user-domain-name Default \
> --os-project-name myproject --os-username myuser token issue ## request a token as myuser
Create the client environment scripts:
admin-openrc.sh and demo-openrc.sh; make them executable with chmod +x admin-openrc.sh
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin ## demo-openrc.sh uses myproject
export OS_USERNAME=admin ## demo-openrc.sh uses myuser
export OS_PASSWORD=admin123 ## demo-openrc.sh uses myuser123
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# . admin-openrc.sh ## source the environment variables
# openstack token issue ## request a token
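With admin-openrc.sh sourced, a couple of optional extra sanity checks:
# openstack user list ## admin and myuser should be listed
# openstack project list ## admin, service and myproject should be listed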
--------------------------------------------------------

glance
1. Glance is OpenStack's image service. It provides querying, registration, and delivery of virtual machine images. Glance does not implement image storage itself; it acts as a proxy, the link between the image storage backend and the other OpenStack components.
2. Create the component's database
# mysql
> create database glance;
> grant all privileges on glance.* to 'glance'@'localhost' identified by 'glance123';
> grant all privileges on glance.* to 'glance'@'%' identified by 'glance123';
> flush privileges;
3. Create the user, service, and endpoints
# openstack user create --domain default --password-prompt glance ## set the password to glance123
# openstack role add --project service --user glance admin ## no output
# openstack service create --name glance --description "OpenStack Image" image ## create the image service
# openstack endpoint create --region RegionOne image public http://controller:9292 ## create the public endpoint
# openstack endpoint create --region RegionOne image internal http://controller:9292 ##internal
# openstack endpoint create --region RegionOne image admin http://controller:9292 ##admin
4. Install and configure
# apt install glance -y
Edit /etc/glance/glance-api.conf:
[database]
connection = mysql+pymysql://glance:glance123@controller/glance ## add this line under [database]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123 ## the only value that must change: the glance user's password
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Edit /etc/glance/glance-registry.conf:
[database]
connection = mysql+pymysql://glance:glance123@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance123
[paste_deploy]
flavor = keystone
# su -s /bin/sh -c "glance-manage db_sync" glance ##有回显,导入15个表。
# systemctl restart glance-api glance-registry && systemctl enable glance-api glance-registry ##开启并自启动服务,这里是重启服务才会生效配置文件
5.验证
# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img ##下载测试镜像cirros,13M左右
# openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img --disk-format qcow2 --container-format bare --public ##创建镜像
# openstack image list ##查看存在的镜像
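Optionally inspect the image record itself:
# openstack image show cirros ## status should be active, with checksum and size filled in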

--------------------------------------------------------

controller-nova
1. Controller node (background: http://www.cnblogs.com/horizonli/p/5172216.html)
The controller node hosts network control, scheduling, the API services, volume management, database management, identity and image management, and the services behind the Dashboard; to support them it also needs the SQL, MQ, and NTP services.
2. Database setup
# mysql
> create database nova_api;
> create database nova;
> create database nova_cell0;
> create database placement;
> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'nova123'; ## grant nova.* and nova_cell0.* the same way; grant placement.* to the placement user (password placement123, matching the config below)
> grant all privileges on nova_api.* to 'nova'@'%' identified by 'nova123';
3. Create the Compute service credentials
# openstack user create --domain default --password-prompt nova ## set the password to nova123
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

# openstack user create --domain default --password-prompt placement ## set the password to placement123
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778

4. Install and configure the nova packages
# apt -y install nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler nova-placement-api
Edit /etc/nova/nova.conf:
[api_database]
connection = mysql+pymysql://nova:nova123@controller/nova_api

[database]
connection = mysql+pymysql://nova:nova123@controller/nova

[placement_database]
connection = mysql+pymysql://placement:placement123@controller/placement

[DEFAULT]
transport_url = rabbit://openstack:rabbit123@controller
my_ip = 192.168.137.134
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement123 ## change to the password you set

# su -s /bin/sh -c "nova-manage api_db sync" nova ##没有回显,填充nova_api和placement数据库,都是32个表
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova ##没有回显注册cell0库
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova ##有uuid的回显
# su -s /bin/sh -c "nova-manage db sync" nova ##110个表
# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova ##验证cell0和cell1
# systemctl restart nova-api nova-consoleauth nova-scheduler nova-conductor nova-novncproxy
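Before moving on to the compute node, you can optionally confirm the controller services registered themselves:
# openstack compute service list ## nova-consoleauth, nova-scheduler and nova-conductor should show state up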

------------------------
nova-compute (compute node)
1. The compute node runs the virtual machine instances, by default on the KVM hypervisor. It also needs the network agent, which plugs instances into the virtual networks.
2. Install and configure
# apt install nova-compute -y
Edit /etc/nova/nova.conf. Since compute and controller share one machine here, only the novncproxy URL under [vnc] needs adding:
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html ## the one extra line compared with the controller config
# egrep -c '(vmx|svm)' /proc/cpuinfo ## check for hardware virtualization; 1 or more means the machine supports hardware acceleration
If the result is 0, edit /etc/nova/nova-compute.conf:
[libvirt]
virt_type = qemu
# systemctl restart nova-compute
3. Verification
# openstack compute service list --service nova-compute ## list the compute hosts
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova ## discover the compute node
To discover new compute hosts automatically, edit /etc/nova/nova.conf on the controller:
[scheduler]
discover_hosts_in_cells_interval = 300

Verify everything installed so far:
# openstack compute service list ## list the services
# openstack catalog list ## list the API endpoints
# openstack image list ## list the images
# nova-status upgrade check ## check that the cells and the placement API are working


------------------------------
neutron (provider network)
1. Controller node
# mysql
> CREATE DATABASE neutron;
> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron123';
> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';
# openstack user create --domain default --password-prompt neutron
# openstack role add --project service --user neutron admin
# openstack service create --name neutron --description "OpenStack Networking" network
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
Install the packages:
# apt -y install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent
Edit /etc/neutron/neutron.conf:
[database]
connection = mysql+pymysql://neutron:neutron123@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:rabbit123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron123
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Edit /etc/neutron/plugins/ml2/ml2_conf.ini:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# sysctl -a | grep net.bridge.bridge-nf-call-ip ## both values below must be 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
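If those keys are missing entirely, the bridge netfilter module is probably not loaded; loading it should expose them (module name assumed from stock Ubuntu kernels):
# modprobe br_netfilter ## persist it in /etc/modules if needed
# sysctl -a | grep net.bridge.bridge-nf-call-ip ## re-check; both values should be 1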
Edit /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

Edit /etc/neutron/metadata_agent.ini:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = metadata123 ## set a shared secret
Edit /etc/nova/nova.conf:
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron123
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata123
# su -s /bin/sh -c "neuntro-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron ##填充neutron数据库.167个表,有回显
# systemctl restart nova-api
# systemctl restart neutron-server neutron-linuxbridge-agent neutron-dhcp-agent neutron-metadata-agent

2. Compute node
# apt install neutron-linuxbridge-agent -y
# systemctl restart nova-compute neutron-linuxbridge-agent


3. Verification
# openstack extension list --network ## list the loaded extensions to verify the neutron-server process started successfully
# openstack network agent list ## list the network agents

------------------------------------------------------
horizon-dashboard
1. Horizon is a web dashboard for managing and controlling OpenStack services: it can manage instances and images, create key pairs, attach volumes to instances, operate on Swift containers, and more. Users can also access an instance directly from the dashboard via a console or VNC.

2. Install and configure
# apt -y install openstack-dashboard
Edit /etc/openstack-dashboard/local_settings.py:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
Confirm /etc/apache2/conf-available/openstack-dashboard.conf contains:
WSGIApplicationGroup %{GLOBAL}
# systemctl restart apache2
Browse to http://controller/horizon
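A quick reachability check before opening a browser (expect a 200 or a redirect toward the login page):
# curl -sI http://controller/horizon | head -n 1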
---------------------------------------------
cinder-controller
1. Provides block storage services for OpenStack.
2. Create the database
# mysql
> create database cinder;
> grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'cinder123';
> grant all privileges on cinder.* to 'cinder'@'%' identified by 'cinder123';
# openstack user create --domain default --password-prompt cinder
# openstack role add --project service --user cinder admin
# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s


3. Install and configure
# apt install cinder-api cinder-scheduler -y
Edit /etc/cinder/cinder.conf:
[database]
connection = mysql+pymysql://cinder:cinder123@controller/cinder
[DEFAULT]
transport_url = rabbit://openstack:rabbit123@controller
auth_strategy = keystone
my_ip = 192.168.137.134
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder123
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
# su -s /bin/sh -c "cinder-manage db sync" cinder ##填充cinder库,35个表
修改配置文件:/etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
# systemctl restart nova-api cinder-scheduler apache2

----------------------------
cinder storage node
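The volume packages are not installed by the earlier cinder-api/cinder-scheduler step; presumably something like the following is needed first (package names per the Ubuntu archive, inferred from the services restarted below):
# apt -y install lvm2 cinder-volume tgt ## tgt listed explicitly in case it is not pulled in automatically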
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
Edit /etc/cinder/cinder.conf:
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
[DEFAULT]
enabled_backends = lvm
glance_api_servers = http://controller:9292
# systemctl restart tgt cinder-scheduler cinder-volume
-----------------
cinder-backup (optional)
# apt install cinder-backup -y
Edit /etc/cinder/cinder.conf:
[DEFAULT]
backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL ## find SWIFT_URL with: openstack catalog show object-store
Verification:
# openstack volume service list
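Beyond the service list, an end-to-end check is to create a small test volume (names here are arbitrary):
# openstack volume create --size 1 test-vol
# openstack volume list ## status should move from creating to available
# openstack volume delete test-vol ## clean up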

--------------------------------------------------------
Create an instance
1. Create the network
# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
# openstack subnet create --network provider --allocation-pool start=203.0.113.101,end=203.0.113.250 --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 provider
# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
# openstack security group rule create --proto icmp default
# openstack security group rule create --proto tcp --dst-port 22 default

# openstack flavor list
# openstack image list
# openstack network list
# openstack security group list
# openstack server create --flavor m1.nano --image cirros --nic net-id=02b2e397-ebe0-4656-9a43-12bcfcc7b243 --security-group default provider-instance ## substitute the net-id with your provider network ID from openstack network list
# openstack server list
# openstack console url show provider-instance
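To confirm the instance booted, check its status and try reaching it over the provider network (<instance-ip> is a placeholder; substitute the real address from the output):
# openstack server show provider-instance ## status should be ACTIVE, with an address from 203.0.113.0/24
# ping -c 4 <instance-ip> ## works once the icmp security-group rule above is in place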

Source: www.cnblogs.com/guoguodelu/p/10929289.html