[SIM] Dual SIM with independent hot-swap and 3G switch support: notes for repeated hot-swap stress testing
[DESCRIPTION]
For dual-SIM devices with independent hot-swap and 3G/4G switch support, repeated hot-swap stress testing must follow the notes below; otherwise you will reproduce many puzzling failures where a card is not recognized or cannot register on the network.
[SOLUTION]
For dual-SIM devices that also support 4G switch, observe the following; otherwise the modem's SIM task and the AP-side RILD get their SIM states out of sync, and the SIM state can no longer be updated correctly.
1. After inserting a card into a slot, wait at least 1 s before removing it; after removing it, wait at least 2 s before re-inserting a card into the same slot. Otherwise the SIM state becomes inconsistent.
Insert/remove operations on the two different slots must be spaced at least 3 s apart.
2. With 4G switch enabled, when card 1 is removed while slot 2 holds a card, the modem is reset to move the 4G signal over to slot 2.
Likewise, when the 4G signal is on SIM2 and SIM2 is removed, if slot 1 holds a card the modem is reset to move the 4G signal to slot 1.
Note: by default only slot 1 carries the 4G signal.
Therefore, after removing one card, if a "modem logging is off" prompt appears or the other card enters the network-searching state, do not remove the other card right away: inserting or removing cards while the modem is restarting corrupts the synchronization of SIM state.
Wait until the remaining card's signal goes from no-service to searching and then to a stable signal before the next operation.
For dual-card independent hot-swap stress testing, use one of the following two plans:
1. With 3G switch enabled:
1) Stress-test a single slot at a time: test slot 1 first, then slot 2 (for slot 2, boot with only slot 2 populated so that the 4G signal stays on SIM2, then hot-swap slot 2 repeatedly);
2) Alternate between the two slots: the rules in notes 1 and 2 above must be followed, otherwise the SIM state becomes inconsistent and cards will inevitably fail to be recognized.
2. With 3G switch disabled:
This is the simpler case: only the hot-swap intervals from note 1 need to be observed.
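As a minimal sketch, the timing rules above translate into stress loops like the following. The insert_card()/remove_card() helpers are hypothetical: replace them with whatever actuator or relay control your test rig uses to work the SIM tray. The modem-reset wait from note 2 would additionally require polling the service state, which is omitted here.

#include <chrono>
#include <thread>

// Hypothetical helpers: wire these to the rig's tray/relay control.
void insert_card(int slot);
void remove_card(int slot);

static void wait_s(int s) { std::this_thread::sleep_for(std::chrono::seconds(s)); }

// Plan 1.1: stress a single slot, honoring the 1 s / 2 s rules from note 1.
void stress_one_slot(int slot, int cycles) {
    for (int i = 0; i < cycles; ++i) {
        insert_card(slot);
        wait_s(1);              // >= 1 s between insert and remove
        remove_card(slot);
        wait_s(2);              // >= 2 s before re-inserting the same slot
    }
}

// Plan 1.2: alternate slots; operations on different slots >= 3 s apart.
void stress_both_slots(int cycles) {
    for (int i = 0; i < cycles; ++i) {
        insert_card(1); wait_s(3);
        insert_card(2); wait_s(3);
        remove_card(1); wait_s(3);
        remove_card(2); wait_s(3);
    }
}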
Android Hot-Plug Event Handling Flow -- Vold
I. Android hot-plug event handling flow diagram
The Android hot-plug event handling flow is shown in the figure below (figure not reproduced here):
1. NetlinkManager:
The full implementation is NetlinkManager.cpp, located in the Android 4.x source at /system/vold/NetlinkManager.cpp. Its main job is to receive event messages from the kernel and hand them to the onEvent() method of the NetlinkHandler class; NetlinkHandler lives in /system/vold/NetlinkHandler.cpp.
2. VolumeManager:
The full implementation is VolumeManager.cpp, located in the Android 4.x source at /system/vold/VolumeManager.cpp. Its main job is to receive the event messages already processed by NetlinkManager. Since we are following an SD-card mount here, note that the messages coming out of NetlinkManager fall into five kinds: block, switch, usb_composite, battery, and power_supply. The SD-card mount event is a block event.
3. DirectVolume:
Located at /system/vold/DirectVolume.cpp. A utility class that processes incoming events further; block events divide into four actions: Add, Removed, Change, and Noaction. The discussion below follows the Add action.
4. Volume:
Located at /system/vold/Volume.cpp; this is the class chiefly responsible for mounting the SD card. Volume.cpp checks the SD card's format, mounts cards that meet the requirements, and passes the mount message to NativeDaemonConnector over a socket.
5. CommandListener:
Located at /system/vold/CommandListener.cpp. It talks to NativeDaemonConnector over the vold socket.
6. NativeDaemonConnector:
Located at frameworks/base/services/java/com.android.server/NativeDaemonConnector.java. It receives the SD-card mount messages sent by Volume.cpp and passes them upward.
7. MountService:
Located at frameworks/base/services/java/com.android.server/MountService.java. MountService is a system service that manages and queries external storage devices; whenever the state of an external storage device changes, it sends the corresponding notifications to upper-layer applications. It is a very important class in the Android system.
8. StorageManager:
Located at frameworks/base/core/java/android/os/storage/StorageManager.java. Its class comment describes it as the interface to the system storage service. Settings has a Storage screen and registers a listener on this class; StorageManager in turn registers its own listener with MountService, so this class is mainly how upper-layer applications obtain SD-card state.
III. A typical flow (SD-card mounting)
The walk-through starts when the kernel detects the SD-card insertion event (the hardware interrupts and driver loading that precede it are not covered) and ends when the mount shows up under Android > Settings > Storage.
1. The kernel emits an SD-card insertion uevent.
2. NetlinkHandler::onEvent() receives the kernel's uevent and parses it.
3. VolumeManager::handleBlockEvent() processes the event produced in step 2.
4. Next, DirectVolume::handleBlockEvent() is called. Two things in this method deserve attention:
First, the code walks the mPath container looking for a sysfs_path matching the event.
Second, the event's action is handled four ways: Add, Removed, Change, Noaction.
In the Add action, for instance (we follow Add since this is the SD-card mount flow), the device node is created first, and then disk and partition devices are handled separately; an SD card is a disk device.
5. That leads to DirectVolume::handleDiskAdded(), which broadcasts a disk-insert message.
6. SocketListener::runListener() receives the message broadcast by DirectVolume::handleDiskAdded(). This method extracts the event's data via the socket. (PS: SocketListener.cpp lives in the Android source under /system/core/libsysutils/src/, as does the FrameworkListener.cpp mentioned below; it took me quite a while to find them.)
7. FrameworkListener::onDataAvailable() is called to process the received message.
8. FrameworkListener::dispatchCommand() dispatches the command.
9. Inside FrameworkListener::dispatchCommand(), runCommand() invokes the corresponding command handler.
10. The concrete runCommand() implementations are in /system/vold/CommandListener.cpp. There you find CommandListener::VolumeCmd::runCommand(), which, as the name suggests, parses commands dispatched to Volume. Its "mount" branch executes vm->mountVolume(arg[2]).
11. mountVolume(arg[2]) is implemented in VolumeManager::mountVolume(), which calls v->mountVol().
12. mountVol() is implemented in Volume::mountVol(); this is the function that actually performs the mount. (All the remaining processing happens inside this method, and during the mount it broadcasts status messages to the upper layers via setState().)
13. setState(Volume::Checking): broadcast upward that the SD card is being checked in preparation for mounting.
14. Fat::check(): the SD-card check, verifying that the card is FAT-formatted.
15. Fat::doMount(): mounts the SD card.
At this point the mount itself is essentially done; what remains is to send the mounted state to the upper layers. As mentioned in step 13, messages are also sent upward during checking and mounting.
16. MountService's constructor starts a listening thread for socket messages from vold:
Thread thread = new Thread(mConnector, VOLD_TAG);
thread.start();
17. mConnector is a NativeDaemonConnector object; NativeDaemonConnector implements Runnable and overrides run(), where a while(true) loop calls listenToSocket() to listen continuously.
18. listenToSocket() first sets up the socket for communicating with vold, then calls MountService's onDaemonConnected(). (PS: Java reaches native code through JNI, and native code reaches Java back through this socket; the article "Android中Native与Frameworks通信" gives a short introduction for interested readers.)
19. onDaemonConnected() is declared in the interface INativeDaemonConnectorCallbacks; MountService implements that interface and overrides the method. It starts a thread for updating the state of external storage devices, and the main state-update logic is implemented there.
20. Back in listenToSocket(), the events vold delivers are read from the input stream and placed on a queue.
21. Those events are then taken off the queue in onDaemonConnected() with queue.take(), and depending on the event updatePublicVolumeState() is called; that method calls PackageManagerService's updateExternalState() to update the storage device's state. (Note: I don't quite understand PackageManagerService's unloadAllContainers(args) method here.)
22. The update itself is performed through packageHelper.getMountService().finishMediaUpdate().
23. After the update, updatePublicVolumeState() executes:
bl.mListener.onStorageStateChanged();
In the Android source, /packages/apps/Settings/src/com/android/settings/deviceinfo/Memory.java implements StorageEventListener as an anonymous inner class and overrides onStorageStateChanged(), so when updatePublicVolumeState() invokes onStorageStateChanged(), Memory.java receives it too, refreshes the Settings UI, and Settings > Storage shows the SD card's new state. The mount has thus propagated from the bottom layer all the way to the top.
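For intuition, the core of step 15 is a mount(2) call on the vfat filesystem. The following is only a sketch of that essence, not the actual Fat::doMount, which also handles UID/GID mapping, permission masks, executability, and retries; the option string here is illustrative:

#include <sys/mount.h>
#include <cstdio>

// Minimal sketch of what mounting a FAT partition boils down to.
int mountFatVolume(const char *devicePath, const char *mountPoint) {
    // Real vold passes similar hardening flags plus vfat-specific options.
    unsigned long flags = MS_NODEV | MS_NOSUID | MS_NOEXEC | MS_DIRSYNC;
    int rc = mount(devicePath, mountPoint, "vfat", flags,
                   "utf8,shortname=mixed");
    if (rc != 0)
        perror("mount");
    return rc;
}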
1. Vold in brief
Vold is short for volume daemon. It handles mounting and unmounting the system's large-capacity storage devices (USB/SD); it runs as a daemon, and it is this process that supports hot-plugging of these storage peripherals. Vold was upgraded to vold 2.0 as of Android 2.2, and since Android 4.0 its configuration file lives at /etc/vold.fstab.
2. Vold workflow
Vold's work divides roughly into three parts: creating the listeners, bootstrapping, and event handling.
(1) Creating the listeners
Creating the listeners means creating the listening connections: one listens for uevents from the kernel, the other listens for control commands from the upper layers, such as mounting and unmounting the SD card. The "connections" here are sockets. When the Android system boots, the init process parses init.rc, which contains:

service vold /system/bin/vold
    socket vold stream 0660 root mount
    ioprio be 2

With this, the system creates the socket used to talk to the upper layers at boot; the socket's name is "vold".
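On the daemon side, that init-created socket is retrieved by name. A minimal sketch, using the libcutils helper android_get_control_socket() (the same call SocketListener::startListener() makes later in this article):

#include <cutils/sockets.h>
#include <sys/socket.h>

// Sketch: fetch the fd of the socket init created from the
// "socket vold stream 0660 root mount" line, then accept clients on it.
int getVoldControlSocket() {
    int fd = android_get_control_socket("vold");
    if (fd < 0)
        return -1;          // not running under init, or name mismatch
    listen(fd, 4);          // same backlog SocketListener uses
    return fd;
}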
The socket used to talk to the kernel is created in NetlinkManager::start(), under /system/vold in the 4.0 source: socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT). Both are set up from main.cpp when VolumeManager and NetlinkManager are instantiated.
(2) Bootstrapping
When the vold process starts, it checks the external storage devices already present. It first loads and parses vold.fstab and checks whether the mount points are already mounted; it then mounts the SD card, and finally handles USB mass storage. The file is parsed line by line, as a glance at vold.fstab makes clear.
The most important line in vold.fstab:
dev_mount sdcard /mnt/sdcard auto /devices/platform/rk29_sdmmc.0/mmc_host/mmc0
The fields are: mount command, label, mount point, partition number, device sysfs path.
Partition number: auto means the first partition.
The fields must not be separated by spaces, only by tabs. (Note: spaces are used above for alignment; if you edit vold.fstab yourself and separate fields with spaces, the system will not recognize the entry.)
If vold.fstab parses without error, VolumeManager creates a DirectVolume for the entry; if vold.fstab is missing or cannot be opened, vold falls back to reading the Linux kernel parameters: if they contain SDCARD (the SD card's default path), VolumeManager creates an AutoVolume, and if that default path is absent, no volume is created.
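A minimal sketch of parsing one dev_mount line follows. It is illustrative only: the real process_config() tokenizes each line with strsep() and supports more options, and the struct and buffer sizes here are arbitrary:

#include <cstdio>
#include <cstring>

struct FstabEntry {
    char label[32];
    char mountPoint[64];
    char part[16];        // "auto" or a partition number
    char sysfsPath[256];
};

// Parse "dev_mount <label> <mount_point> <part> <sysfs_path>".
int parseDevMountLine(const char *line, FstabEntry *e) {
    char cmd[32];
    if (sscanf(line, "%31s %31s %63s %15s %255s",
               cmd, e->label, e->mountPoint, e->part, e->sysfsPath) != 5)
        return -1;                                   // malformed line
    return strcmp(cmd, "dev_mount") == 0 ? 0 : -1;   // skip comments etc.
}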
(3) Event handling
By listening on the two sockets, vold handles events and responds to the upper-layer applications.
a) The kernel emits a uevent
NetlinkManager detects the uevent from the kernel, parses it, and calls NetlinkHandler::onEvent(). That method handles each kind of event separately; the important ones here are:
"block" events, covering Volume mount, unmount, createAsec, and so on. They are handled by VolumeManager's handleBlockEvent(evt); through polymorphism the call ends up in AutoVolume's or DirectVolume's handleBlockEvent() method.
"switch" events, covering Volume connect, disconnect, and so on. Based on the operation, the device parameters (device type, mount point, and so on) are changed, and the framework layer is informed through CommandListener.
b) The framework sends a control command
In the direction opposite to a), CommandListener receives a command from the framework layer (sent by MountService) and calls into VolumeManager; VolumeManager finds the matching Volume and calls that Volume's functions to mount or unmount. The Volume operations are ultimately completed through Linux system calls.
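To make direction b) concrete, here is a minimal sketch of a client sending the "volume mount" command over the vold control socket. The command syntax is inferred from CommandListener::VolumeCmd::runCommand() as discussed in step 10 above; the real MountService goes through NativeDaemonConnector and also parses the numeric response codes:

#include <cutils/sockets.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

// Sketch: connect to the "vold" control socket and ask it to mount a volume.
int requestMount(const char *mountPoint) {
    int fd = socket_local_client("vold", ANDROID_SOCKET_NAMESPACE_RESERVED,
                                 SOCK_STREAM);
    if (fd < 0)
        return -1;
    char cmd[128];
    snprintf(cmd, sizeof(cmd), "volume mount %s", mountPoint);
    write(fd, cmd, strlen(cmd) + 1);   // FrameworkListener splits commands on '\0'
    char resp[256];
    ssize_t n = read(fd, resp, sizeof(resp) - 1);  // e.g. a "200 ..." status line
    if (n > 0) { resp[n] = '\0'; printf("vold: %s\n", resp); }
    close(fd);
    return 0;
}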
V. Vold in user space
1. NetlinkManager
NetlinkManager is responsible for the interaction with the kernel, implemented over a PF_NETLINK socket.
Vold's startup code is as follows (/system/vold/main.cpp):
int main() {
    VolumeManager *vm;
    CommandListener *cl;
    NetlinkManager *nm;

    SLOGI("Vold 2.1 (the revenge) firing up");

    mkdir("/dev/block/vold", 0755);

    if (!(vm = VolumeManager::Instance())) {
        SLOGE("Unable to create VolumeManager");
        exit(1);
    }

    if (!(nm = NetlinkManager::Instance())) {
        SLOGE("Unable to create NetlinkManager");
        exit(1);
    }

    cl = new CommandListener();
    vm->setBroadcaster((SocketListener *) cl);
    nm->setBroadcaster((SocketListener *) cl);

    if (vm->start()) {
        SLOGE("Unable to start VolumeManager (%s)", strerror(errno));
        exit(1);
    }

    if (process_config(vm)) {
        SLOGE("Error reading configuration (%s)... continuing anyways", strerror(errno));
    }

    if (nm->start()) {
        SLOGE("Unable to start NetlinkManager (%s)", strerror(errno));
        exit(1);
    }

#ifdef USE_USB_MODE_SWITCH
    SLOGE("Start Misc devices Manager...");
    MiscManager *mm;
    if (!(mm = MiscManager::Instance())) {
        SLOGE("Unable to create MiscManager");
        exit(1);
    }
    mm->setBroadcaster((SocketListener *) cl);
    if (mm->start()) {
        SLOGE("Unable to start MiscManager (%s)", strerror(errno));
        exit(1);
    }
    G3Dev* g3 = new G3Dev(mm);
    g3->handleUsb();
    mm->addMisc(g3);
#endif

    // Cold boot: vold missed the uevents emitted before it started, so replay them.
    // Writing "add\n" into a sysfs uevent file retriggers the event, equivalent to
    // one hot-plug.
    coldboot("/sys/block");
    coldboot("/sys/class/switch");

    if (cl->startListener()) {
        SLOGE("Unable to start CommandListener (%s)", strerror(errno));
        exit(1);
    }

    // Eventually we'll become the monitoring thread
    while(1) {
        sleep(1000);
    }

    SLOGI("Vold exiting");
    exit(0);
}
NetlinkManager's class family looks like this (the dashed lines in the original figure mark the startup-time call flow):
(1) class NetlinkManager (its start() creates the NetlinkHandler object and hands it the socket it created)
(2) class NetlinkHandler : public NetlinkListener (implements onEvent)
(3) class NetlinkListener : public SocketListener (implements onDataAvailable)
(4) class SocketListener (implements runListener, which runs in a thread, uses select() to find the sockets with pending data, and calls onDataAvailable to read them)
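Condensed as declarations, the hierarchy reads as follows. This is a sketch of the relevant members only; the real classes (in /system/core/libsysutils and /system/vold) carry more state:

class SocketClient;
class NetlinkEvent;

class SocketListener {                        // owns the fd and the select() loop
public:
    int startListener();                      // spawns a thread running runListener()
protected:
    virtual bool onDataAvailable(SocketClient *c) = 0;
private:
    static void *threadStart(void *obj);
    void runListener();                       // select() over mSock and mClients
};

class NetlinkListener : public SocketListener {
protected:
    virtual bool onDataAvailable(SocketClient *cli); // recv uevent, decode, onEvent()
    virtual void onEvent(NetlinkEvent *evt) = 0;
};

class NetlinkHandler : public NetlinkListener {
protected:
    virtual void onEvent(NetlinkEvent *evt);  // dispatch by subsystem ("block", ...)
};

class NetlinkManager {                        // creates the PF_NETLINK socket and
public:                                       // wraps it in a NetlinkHandler
    int start();
};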
2. NetlinkManager::start()
int NetlinkManager::start() {
    struct sockaddr_nl nladdr;
    int sz = 64 * 1024;
    int on = 1;

    memset(&nladdr, 0, sizeof(nladdr));
    nladdr.nl_family = AF_NETLINK;
    nladdr.nl_pid = getpid();
    nladdr.nl_groups = 0xffffffff;

    // Create a socket for asynchronous kernel/user-space communication,
    // to monitor the system's hotplug events
    if ((mSock = socket(PF_NETLINK,
                        SOCK_DGRAM, NETLINK_KOBJECT_UEVENT)) < 0) {
        SLOGE("Unable to create uevent socket: %s", strerror(errno));
        return -1;
    }

    if (setsockopt(mSock, SOL_SOCKET, SO_RCVBUFFORCE, &sz, sizeof(sz)) < 0) {
        SLOGE("Unable to set uevent socket SO_RCVBUFFORCE option: %s", strerror(errno));
        return -1;
    }

    if (setsockopt(mSock, SOL_SOCKET, SO_PASSCRED, &on, sizeof(on)) < 0) {
        SLOGE("Unable to set uevent socket SO_PASSCRED option: %s", strerror(errno));
        return -1;
    }

    if (bind(mSock, (struct sockaddr *) &nladdr, sizeof(nladdr)) < 0) {
        SLOGE("Unable to bind uevent socket: %s", strerror(errno));
        return -1;
    }

    // Instantiate a NetlinkHandler with the new socket; NetlinkHandler extends
    // NetlinkListener, which in turn extends SocketListener
    mHandler = new NetlinkHandler(mSock);
    if (mHandler->start()) {  // start the NetlinkHandler
        SLOGE("Unable to start NetlinkHandler: %s", strerror(errno));
        return -1;
    }
    return 0;
}
The socket is passed as a constructor argument to create the NetlinkHandler object, and the NetlinkHandler is then started.
int NetlinkHandler::start() {
    return this->startListener();
}

int SocketListener::startListener() {

    if (!mSocketName && mSock == -1) {
        SLOGE("Failed to start unbound listener");
        errno = EINVAL;
        return -1;
    } else if (mSocketName) {
        if ((mSock = android_get_control_socket(mSocketName)) < 0) {
            SLOGE("Obtaining file descriptor socket '%s' failed: %s",
                  mSocketName, strerror(errno));
            return -1;
        }
    }

    if (mListen && listen(mSock, 4) < 0) {
        SLOGE("Unable to listen on socket (%s)", strerror(errno));
        return -1;
    } else if (!mListen)
        mClients->push_back(new SocketClient(mSock, false));

    if (pipe(mCtrlPipe)) {
        SLOGE("pipe failed (%s)", strerror(errno));
        return -1;
    }

    if (pthread_create(&mThread, NULL, SocketListener::threadStart, this)) {
        SLOGE("pthread_create (%s)", strerror(errno));
        return -1;
    }

    return 0;
}

void *SocketListener::threadStart(void *obj) {
    SocketListener *me = reinterpret_cast<SocketListener *>(obj);

    me->runListener();
    pthread_exit(NULL);
    return NULL;
}

void SocketListener::runListener() {

    SocketClientCollection *pendingList = new SocketClientCollection();

    while(1) { // infinite loop: listen forever
        SocketClientCollection::iterator it;
        fd_set read_fds;
        int rc = 0;
        int max = -1;

        FD_ZERO(&read_fds); // clear the fd set read_fds

        if (mListen) {
            max = mSock;
            FD_SET(mSock, &read_fds); // add the listening fd to read_fds
        }

        FD_SET(mCtrlPipe[0], &read_fds); // add the control pipe's read end to read_fds
        if (mCtrlPipe[0] > max)
            max = mCtrlPipe[0];

        pthread_mutex_lock(&mClientsLock); // mClients must be accessed under the lock
        for (it = mClients->begin(); it != mClients->end(); ++it) {
            int fd = (*it)->getSocket();
            FD_SET(fd, &read_fds); // walk mClients, adding each client's fd to read_fds
            if (fd > max)
                max = fd;
        }
        pthread_mutex_unlock(&mClientsLock);

        // block until one of the fds (sockets) has data
        if ((rc = select(max + 1, &read_fds, NULL, NULL, NULL)) < 0) {
            if (errno == EINTR)
                continue;
            SLOGE("select failed (%s)", strerror(errno));
            sleep(1);
            continue;
        } else if (!rc)
            continue;

        if (FD_ISSET(mCtrlPipe[0], &read_fds))
            break;
        if (mListen && FD_ISSET(mSock, &read_fds)) { // the listening socket
            struct sockaddr addr;
            socklen_t alen;
            int c;

            do {
                alen = sizeof(addr);
                c = accept(mSock, &addr, &alen); // accept the connection; on success
                                                 // c is the data socket, added to mClients
            } while (c < 0 && errno == EINTR);
            if (c < 0) {
                SLOGE("accept failed (%s)", strerror(errno));
                sleep(1);
                continue;
            }
            pthread_mutex_lock(&mClientsLock);
            mClients->push_back(new SocketClient(c, true));
            pthread_mutex_unlock(&mClientsLock);
        }

        /* Add all active clients to the pending list first */
        pendingList->clear();
        pthread_mutex_lock(&mClientsLock);
        for (it = mClients->begin(); it != mClients->end(); ++it) {
            int fd = (*it)->getSocket();
            if (FD_ISSET(fd, &read_fds)) {
                pendingList->push_back(*it);
            }
        }
        pthread_mutex_unlock(&mClientsLock);

        while (!pendingList->empty()) { // the non-listening (data) sockets
            it = pendingList->begin();
            SocketClient* c = *it;
            pendingList->erase(it);
            // ****** onDataAvailable is implemented in NetlinkListener ******
            if (!onDataAvailable(c) && mListen) {
                /* Remove the client from our array */
                pthread_mutex_lock(&mClientsLock);
                for (it = mClients->begin(); it != mClients->end(); ++it) {
                    if (*it == c) {
                        mClients->erase(it);
                        break;
                    }
                }
                pthread_mutex_unlock(&mClientsLock);
                /* Destroy the client */
                c->decRef();
            }
        }
    }
    delete pendingList;
}
SocketListener::runListener() is what the thread actually executes. The mListen member records whether the socket is a listening socket; a netlink socket is a datagram (UDP-style) socket, not a listening one, so for it the function's job boils down to: whenever data arrives on the socket, call onDataAvailable() to read it.
3. NetlinkListener::onDataAvailable
bool NetlinkListener::onDataAvailable(SocketClient *cli)
{
    int socket = cli->getSocket();
    ssize_t count;

    // read the uevent message the kernel sent from the socket
    count = TEMP_FAILURE_RETRY(uevent_kernel_multicast_recv(socket, mBuffer, sizeof(mBuffer)));
    if (count < 0) {
        SLOGE("recvmsg failed (%s)", strerror(errno));
        return false;
    }

    NetlinkEvent *evt = new NetlinkEvent();
    if (!evt->decode(mBuffer, count, mFormat)) {
        SLOGE("Error decoding NetlinkEvent");
    } else {
        onEvent(evt); // implemented in NetlinkHandler
    }
    delete evt;
    return true;
}
4. NetlinkHandler::onEvent
void NetlinkHandler::onEvent(NetlinkEvent *evt) {
    VolumeManager *vm = VolumeManager::Instance();
    const char *subsys = evt->getSubsystem();

    if (!subsys) {
        SLOGW("No subsystem found in netlink event");
        return;
    }

    if (!strcmp(subsys, "block")) {
        if (uEventOnOffFlag) {
            SLOGW("####netlink event block ####");
            evt->dump();
        }
        vm->handleBlockEvent(evt);
#ifdef USE_USB_MODE_SWITCH
    } else if (!strcmp(subsys, "usb")
            || !strcmp(subsys, "scsi_device")) {
        SLOGW("subsystem found in netlink event");
        MiscManager *mm = MiscManager::Instance();
        mm->handleEvent(evt);
#endif
    }
}
5. uevent_kernel_multicast_recv
ssize_t uevent_kernel_multicast_recv(int socket, void *buffer, size_t length) {
    struct iovec iov = { buffer, length };
    struct sockaddr_nl addr;
    char control[CMSG_SPACE(sizeof(struct ucred))];
    struct msghdr hdr = {
        &addr, sizeof(addr),
        &iov, 1,
        control, sizeof(control),
        0,
    };

    ssize_t n = recvmsg(socket, &hdr, 0);
    if (n <= 0)
        return n;
    if (addr.nl_groups == 0 || addr.nl_pid != 0)
        goto out;   /* not a kernel multicast message */
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&hdr);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_CREDENTIALS)
        goto out;   /* no sender credentials */
    struct ucred *cred = (struct ucred *)CMSG_DATA(cmsg);
    if (cred->uid != 0)
        goto out;   /* sender is not root */
    return n;

out:
    bzero(buffer, length);  /* discard the message and report an I/O error */
    errno = EIO;
    return -1;
}
VI. The kernel side related to Vold
Netlink socks created in user space are kept by the kernel on nl_table[sk->sk_protocol].mc_list; netlink socks created in kernel space are kept on uevent_sock_list. The sk->sk_protocol above must be the same protocol as uevent_sock_list's: messages can only be sent between the two when the protocols match.
1. Creating the kernel-side sock
The user-space socket is created like this (/system/vold/NetlinkManager.cpp):
if ((mSock = socket(PF_NETLINK,
                    SOCK_DGRAM, NETLINK_KOBJECT_UEVENT)) < 0) {
    SLOGE("Unable to create uevent socket: %s", strerror(errno));
    return -1;
}
And the kernel-side socket is created like this (kernel/lib/kobject_uevent.c):
static int uevent_net_init(struct net *net)
{
    struct uevent_sock *ue_sk;

    ue_sk = kzalloc(sizeof(*ue_sk), GFP_KERNEL);
    if (!ue_sk)
        return -ENOMEM;

    ue_sk->sk = netlink_kernel_create(net, NETLINK_KOBJECT_UEVENT,
                                      1, NULL, NULL, THIS_MODULE);
    if (!ue_sk->sk) {
        printk(KERN_ERR
               "kobject_uevent: unable to create netlink socket!\n");
        kfree(ue_sk);
        return -ENODEV;
    }
    mutex_lock(&uevent_sock_mutex);
    list_add_tail(&ue_sk->list, &uevent_sock_list);
    mutex_unlock(&uevent_sock_mutex);
    return 0;
}
As this code shows, once the sock is created it is appended to the global uevent_sock_list; the analysis below revolves around this list.
The prototype of netlink_kernel_create:
struct sock *netlink_kernel_create(struct net *net, int unit, unsigned int groups,
                                   void (*input)(struct sk_buff *skb),
                                   struct mutex *cb_mutex, struct module *module);
1) struct net *net: a network namespace. Different namespaces can each have their own forwarding tables, their own set of net_devices, and so on. By default the global variable init_net is used.
2) int unit: the netlink protocol type, e.g. NETLINK_KOBJECT_UEVENT.
3) unsigned int groups: the number of multicast groups.
4) void (*input)(struct sk_buff *skb): the netlink message-handling function defined by the kernel module; whenever a message arrives on this netlink socket, the input function pointer is invoked, with the message carried in skb. The value netlink_kernel_create returns is a sock pointer: sock is the kernel data structure representing a socket, and a socket created by a user-space application is likewise represented in the kernel by a struct sock.
5) struct mutex *cb_mutex: a mutex used to serialize the netlink callbacks; usually NULL.
6) struct module *module: usually THIS_MODULE.
struct sock: the in-kernel representation of a user-space socket.
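As a minimal sketch of how a kernel module would use this API (matching the six-argument signature quoted above, i.e. a kernel of roughly the 2.6.3x era covered by this article; newer kernels changed the interface), using the existing NETLINK_USERSOCK protocol slot purely for illustration:

#include <linux/module.h>
#include <linux/netlink.h>
#include <net/sock.h>

static struct sock *my_nl_sock;

/* Called whenever user space sends a datagram to this netlink socket. */
static void my_nl_input(struct sk_buff *skb)
{
    struct nlmsghdr *nlh = nlmsg_hdr(skb);
    printk(KERN_INFO "netlink msg from pid %d, len %d\n",
           nlh->nlmsg_pid, nlh->nlmsg_len);
}

static int __init my_nl_init(void)
{
    /* 1 multicast group, no cb_mutex; NETLINK_USERSOCK is just an example slot */
    my_nl_sock = netlink_kernel_create(&init_net, NETLINK_USERSOCK, 1,
                                       my_nl_input, NULL, THIS_MODULE);
    return my_nl_sock ? 0 : -ENODEV;
}

static void __exit my_nl_exit(void)
{
    netlink_kernel_release(my_nl_sock);
}

module_init(my_nl_init);
module_exit(my_nl_exit);
MODULE_LICENSE("GPL");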
2. Related data structures
The related data structures are shown in the figure below (figure not reproduced here):
3. Sending messages to user space
3.1 Message-sending flow diagram (figure not reproduced here)
3.2 kobject_uevent_env
int kobject_uevent_env(struct kobject *kobj, enum kobject_action action,
                       char *envp_ext[])
{
    struct kobj_uevent_env *env;
    const char *action_string = kobject_actions[action];
    const char *devpath = NULL;
    const char *subsystem;
    struct kobject *top_kobj;
    struct kset *kset;
    const struct kset_uevent_ops *uevent_ops;
    u64 seq;
    int i = 0;
    int retval = 0;
#ifdef CONFIG_NET
    struct uevent_sock *ue_sk;
#endif

    pr_debug("kobject: '%s' (%p): %s\n",
             kobject_name(kobj), kobj, __func__);

    /* search the kset we belong to */
    top_kobj = kobj;
    while (!top_kobj->kset && top_kobj->parent)
        top_kobj = top_kobj->parent;

    if (!top_kobj->kset) {
        pr_debug("kobject: '%s' (%p): %s: attempted to send uevent "
                 "without kset!\n", kobject_name(kobj), kobj,
                 __func__);
        return -EINVAL;
    }

    kset = top_kobj->kset;
    uevent_ops = kset->uevent_ops;

    /* skip the event, if uevent_suppress is set */
    if (kobj->uevent_suppress) {
        pr_debug("kobject: '%s' (%p): %s: uevent_suppress "
                 "caused the event to drop!\n",
                 kobject_name(kobj), kobj, __func__);
        return 0;
    }

    /* skip the event, if the filter returns zero */
    if (uevent_ops && uevent_ops->filter)
        if (!uevent_ops->filter(kset, kobj)) {
            pr_debug("kobject: '%s' (%p): %s: filter function "
                     "caused the event to drop!\n",
                     kobject_name(kobj), kobj, __func__);
            return 0;
        }

    /* originating subsystem */
    if (uevent_ops && uevent_ops->name)
        subsystem = uevent_ops->name(kset, kobj);
    else
        subsystem = kobject_name(&kset->kobj);
    if (!subsystem) {
        pr_debug("kobject: '%s' (%p): %s: unset subsystem caused the "
                 "event to drop!\n", kobject_name(kobj), kobj,
                 __func__);
        return 0;
    }

    /* environment buffer */
    env = kzalloc(sizeof(struct kobj_uevent_env), GFP_KERNEL);
    if (!env)
        return -ENOMEM;

    /* complete object path */
    devpath = kobject_get_path(kobj, GFP_KERNEL);
    if (!devpath) {
        retval = -ENOENT;
        goto exit;
    }

    /* default keys */
    retval = add_uevent_var(env, "ACTION=%s", action_string);
    if (retval)
        goto exit;
    retval = add_uevent_var(env, "DEVPATH=%s", devpath);
    if (retval)
        goto exit;
    retval = add_uevent_var(env, "SUBSYSTEM=%s", subsystem);
    if (retval)
        goto exit;

    /* keys passed in from the caller */
    if (envp_ext) {
        for (i = 0; envp_ext[i]; i++) {
            retval = add_uevent_var(env, "%s", envp_ext[i]);
            if (retval)
                goto exit;
        }
    }

    /* let the kset-specific function add its stuff */
    if (uevent_ops && uevent_ops->uevent) {
        retval = uevent_ops->uevent(kset, kobj, env);
        if (retval) {
            pr_debug("kobject: '%s' (%p): %s: uevent() returned "
                     "%d\n", kobject_name(kobj), kobj,
                     __func__, retval);
            goto exit;
        }
    }

    /* mark "add"/"remove" so the core can generate proper cleanup events */
    if (action == KOBJ_ADD)
        kobj->state_add_uevent_sent = 1;
    else if (action == KOBJ_REMOVE)
        kobj->state_remove_uevent_sent = 1;

    /* we will send an event, so request a new sequence number */
    spin_lock(&sequence_lock);
    seq = ++uevent_seqnum;
    spin_unlock(&sequence_lock);
    retval = add_uevent_var(env, "SEQNUM=%llu", (unsigned long long)seq);
    if (retval)
        goto exit;

#if defined(CONFIG_NET)
    /* send the netlink message to every kernel-side uevent sock */
    mutex_lock(&uevent_sock_mutex);
    list_for_each_entry(ue_sk, &uevent_sock_list, list) {
        struct sock *uevent_sock = ue_sk->sk;
        struct sk_buff *skb;
        size_t len;

        /* allocate message with the maximum possible size */
        len = strlen(action_string) + strlen(devpath) + 2;
        skb = alloc_skb(len + env->buflen, GFP_KERNEL);
        if (skb) {
            char *scratch;

            /* add header: action_string + devpath */
            scratch = skb_put(skb, len);
            sprintf(scratch, "%s@%s", action_string, devpath);

            /* copy keys to our continuous event payload buffer */
            for (i = 0; i < env->envp_idx; i++) {
                len = strlen(env->envp[i]) + 1;
                scratch = skb_put(skb, len);
                strcpy(scratch, env->envp[i]);
            }

            NETLINK_CB(skb).dst_group = 1;
            retval = netlink_broadcast_filtered(uevent_sock, skb,
                                                0, 1, GFP_KERNEL,
                                                kobj_bcast_filter,
                                                kobj);
            /* ENOBUFS should be handled in userspace */
            if (retval == -ENOBUFS)
                retval = 0;
        } else
            retval = -ENOMEM;
    }
    mutex_unlock(&uevent_sock_mutex);
#endif

    /* call uevent_helper, usually only enabled during early boot */
    if (uevent_helper[0] && !kobj_usermode_filter(kobj)) {
        char *argv[3];

        argv[0] = uevent_helper;
        argv[1] = (char *)subsystem;
        argv[2] = NULL;
        retval = add_uevent_var(env, "HOME=/");
        if (retval)
            goto exit;
        retval = add_uevent_var(env,
                                "PATH=/sbin:/bin:/usr/sbin:/usr/bin");
        if (retval)
            goto exit;

        retval = call_usermodehelper(argv[0], argv,
                                     env->envp, UMH_WAIT_EXEC);
    }

exit:
    kfree(devpath);
    kfree(env);
    return retval;
}
int kobject_uevent(struct kobject *kobj, enum kobject_action action)
{
    return kobject_uevent_env(kobj, action, NULL);
}
3.3 netlink_broadcast_filtered
int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb, u32 pid,
                               u32 group, gfp_t allocation,
                               int (*filter)(struct sock *dsk, struct sk_buff *skb, void *data),
                               void *filter_data)
{
    struct net *net = sock_net(ssk);
    struct netlink_broadcast_data info;
    struct hlist_node *node;
    struct sock *sk;

    skb = netlink_trim(skb, allocation);

    info.exclude_sk = ssk;
    info.net = net;
    info.pid = pid;
    info.group = group;
    info.failure = 0;
    info.delivery_failure = 0;
    info.congested = 0;
    info.delivered = 0;
    info.allocation = allocation;
    info.skb = skb;
    info.skb2 = NULL;
    info.tx_filter = filter;
    info.tx_data = filter_data;

    /* While we sleep in clone, do not allow the socket list to change */
    netlink_lock_table();

    // deliver this netlink message to every sock on nl_table[ssk->sk_protocol].mc_list
    sk_for_each_bound(sk, node, &nl_table[ssk->sk_protocol].mc_list)
        do_one_broadcast(sk, &info);

    consume_skb(skb);

    netlink_unlock_table();

    if (info.delivery_failure) {
        kfree_skb(info.skb2);
        return -ENOBUFS;
    }
    consume_skb(info.skb2);

    if (info.delivered) {
        if (info.congested && (allocation & __GFP_WAIT))
            yield();
        return 0;
    }
    return -ESRCH;
}
static struct netlink_table *nl_table is a global variable that keeps every netlink sock created from user space, grouped by protocol, one mc_list chain per protocol. It is initialized in netlink_proto_init. The call flow that adds a sock to nl_table[sk->sk_protocol].mc_list lives in kernel/net/netlink/af_netlink.c (the call-chain figure is not reproduced here).
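The user-space trigger for that call flow is simply binding a NETLINK_KOBJECT_UEVENT socket with a nonzero group mask, exactly as NetlinkManager::start() does. A standalone sketch:

#include <linux/netlink.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

// Binding with nl_groups != 0 is what puts this sock on the kernel's
// nl_table[NETLINK_KOBJECT_UEVENT].mc_list, so netlink_broadcast_filtered()
// will deliver uevents to it.
int openUeventSocket() {
    struct sockaddr_nl addr;
    int sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
    if (sock < 0)
        return -1;
    memset(&addr, 0, sizeof(addr));
    addr.nl_family = AF_NETLINK;
    addr.nl_pid = getpid();
    addr.nl_groups = 0xffffffff;   // join all multicast groups
    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;
    }
    return sock;
}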
3.4 do_one_broadcast
static inline int do_one_broadcast(struct sock *sk,
                                   struct netlink_broadcast_data *p)
{
    struct netlink_sock *nlk = nlk_sk(sk);
    int val;

    if (p->exclude_sk == sk)
        goto out;

    if (nlk->pid == p->pid || p->group - 1 >= nlk->ngroups ||
        !test_bit(p->group - 1, nlk->groups))
        goto out;

    if (!net_eq(sock_net(sk), p->net))
        goto out;

    if (p->failure) {
        netlink_overrun(sk);
        goto out;
    }

    sock_hold(sk);
    if (p->skb2 == NULL) {
        if (skb_shared(p->skb)) {
            p->skb2 = skb_clone(p->skb, p->allocation);
        } else {
            p->skb2 = skb_get(p->skb);
            skb_orphan(p->skb2);
        }
    }
    if (p->skb2 == NULL) {
        netlink_overrun(sk);
        /* Clone failed. Notify ALL listeners. */
        p->failure = 1;
        if (nlk->flags & NETLINK_BROADCAST_SEND_ERROR)
            p->delivery_failure = 1;
    } else if (p->tx_filter && p->tx_filter(sk, p->skb2, p->tx_data)) {
        kfree_skb(p->skb2);
        p->skb2 = NULL;
    } else if (sk_filter(sk, p->skb2)) {
        kfree_skb(p->skb2);
        p->skb2 = NULL;
    } else if ((val = netlink_broadcast_deliver(sk, p->skb2)) < 0) {
        netlink_overrun(sk);
        if (nlk->flags & NETLINK_BROADCAST_SEND_ERROR)
            p->delivery_failure = 1;
    } else {
        p->congested |= val;
        p->delivered = 1;
        p->skb2 = NULL;
    }
    sock_put(sk);

out:
    return 0;
}
3.5 netlink_broadcast_deliver
static inline int netlink_broadcast_deliver(struct sock *sk,
                                            struct sk_buff *skb)
{
    struct netlink_sock *nlk = nlk_sk(sk);

    if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
        !test_bit(0, &nlk->state)) {
        skb_set_owner_r(skb, sk);
        skb_queue_tail(&sk->sk_receive_queue, skb);
        sk->sk_data_ready(sk, skb->len);
        return atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf;
    }
    return -1;
}