
Android Binder: the C++ (native) layer

This post mainly covers the native-side flow.

The directories mainly involved are:

  1. android / platform / frameworks / native / master / . / libs
  2. android / platform / frameworks / base / master / . / core / jni

Startup of the native-side ServiceManager

I originally listed a long walkthrough here, but for the full details it is better to just read this instead:
Binder系列3—启动ServiceManager

ServiceManager

The key steps boil down to:

  1. binder_open(driver, 128*1024), which internally:

    1. Opens the /dev/binder device file: bs->fd = open("/dev/binder", O_RDWR);. This call enters the binder driver, which saves the process/thread context information and creates several red-black trees used to store server-side binder entity (node) information, client-side binder reference information, and so on;
    2. Records the size of the memory mapping: bs->mapsize = mapsize;
    3. Creates the 128 KB memory mapping: bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);. This call also enters the binder driver, which maps the same physical pages into both the process's virtual address space and the kernel's virtual address space. That saves one memory copy during inter-process communication and therefore improves IPC efficiency. For example, when a Client wants to pass a block of data to a Server, the usual approach is: the Client copies the data from its own address space into kernel space, and the kernel then copies it from kernel space into the Server's address space so the Server can access it; that takes two copies. With the mapping described here, the Client's data only needs to be copied once into kernel space, which the Server then shares with the kernel, so the whole transfer needs only a single copy.

    These values are kept in a binder_state structure:

    struct binder_state
    {
        int fd;            // file descriptor of the driver
        void *mapped;      // start address of the mapped memory
        unsigned mapsize;  // size of the mapped memory
    };
  2. Tells the Binder driver that it is the daemon (context manager):

    int binder_become_context_manager(struct binder_state *bs)
    {
        return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
    }

    Here the ioctl file operation is used to tell the Binder driver that this process is the daemon; the cmd is BINDER_SET_CONTEXT_MGR and it carries no argument. Inside the driver this results in:

    1. Initializing binder_context_mgr_uid to current->cred->euid; binder_context_mgr_uid represents the uid of the Service Manager daemon, which makes the current thread the daemon of the Binder mechanism.
    2. Creating the binder entity via binder_new_node(); binder_context_mgr_node is used to represent Service Manager's binder entity.
  3. Enters a loop waiting for requests: binder_loop(bs, svcmgr_handler), using the svcmgr_handler function to handle binder requests. When there are no requests, the thread blocks in binder_ioctl() via wait_event_interruptible_exclusive().
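
The three steps above map almost one-to-one onto ServiceManager's entry point. As a rough sketch, main() in service_manager.c (based on an older AOSP version, abbreviated) looks like this:

// frameworks/native/cmds/servicemanager/service_manager.c (older AOSP, abbreviated sketch)
int main(int argc, char **argv)
{
    struct binder_state *bs;

    // Step 1: open /dev/binder and map 128 KB of buffer space
    bs = binder_open(128 * 1024);
    if (!bs) {
        LOGE("failed to open binder driver\n");
        return -1;
    }

    // Step 2: tell the driver that this process is the context manager (handle 0)
    if (binder_become_context_manager(bs)) {
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    // Step 3: loop forever, dispatching incoming transactions to svcmgr_handler
    binder_loop(bs, svcmgr_handler);

    return 0;
}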

Obtaining the ServiceManager remote interface on the native side

Binder系列4—获取ServiceManager

In the Binder mechanism, Service Manager acts both as the daemon and as a Server, yet it differs from an ordinary Server.

For an ordinary Server, a Client that wants the Server's remote interface must obtain it through the getService interface of the Service Manager remote interface; getService is itself an inter-process call over Binder (the Server's binder entity is looked up by name, and the resulting binder reference handle is used to construct a BpBinder).

For Service Manager itself, however, a Client does not need any inter-process communication to obtain its remote interface, because the Service Manager remote interface is a special Binder reference whose handle is always 0.

The function that obtains the Service Manager remote interface is:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}

A class diagram of the related classes:
(figure: class diagram around BpServiceManager)
From the diagram we can see:

  1. BpServiceManager inherits from the BpInterface class. BpInterface is a class template that in turn inherits from IServiceManager (its INTERFACE parameter) and from BpRefBase, and its constructor takes an IBinder:

    template<typename INTERFACE>
    class BpInterface : public INTERFACE, public BpRefBase
    {
    public:
        BpInterface(const sp<IBinder>& remote);

    protected:
        virtual IBinder* onAsBinder();
    };
  2. IServiceManager inherits from IInterface, and both IInterface and BpRefBase inherit from RefBase. BpRefBase has a member variable mRemote of type IBinder*, whose concrete implementation is BpBinder; it represents a Binder reference, with the handle value stored in BpBinder's mHandle member. Through this chain, BpServiceManager also has access to the mRemote pointer and can therefore communicate over binder.

Creating the Service Manager remote interface comes down to the following statement, which involves three steps:

gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
  1. First, ProcessState::self():

    sp<ProcessState> ProcessState::self()
    {
        if (gProcess != NULL) return gProcess;

        AutoMutex _l(gProcessMutex);
        if (gProcess == NULL) gProcess = new ProcessState;
        return gProcess;
    }

    This merely creates a singleton; its constructor:

    ProcessState::ProcessState()
        : mDriverFD(open_driver())
        , mVMStart(MAP_FAILED)
        , mManagesContexts(false)
        , mBinderContextCheckFunc(NULL)
        , mBinderContextUserData(NULL)
        , mThreadPoolStarted(false)
        , mThreadPoolSeq(1)
    {
        if (mDriverFD >= 0) {
            // XXX Ideally, there should be a specific define for whether we
            // have mmap (or whether we could possibly have the kernel module
            // availabla).
    #if !defined(HAVE_WIN32_IPC)
            // mmap the binder, providing a chunk of virtual address space to receive transactions.
            mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
            if (mVMStart == MAP_FAILED) {
                // *sigh*
                LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
                close(mDriverFD);
                mDriverFD = -1;
            }
    #else
            mDriverFD = -1;
    #endif
        }
        if (mDriverFD < 0) {
            // Need to run without the driver, starting our own thread pool.
        }
    }

    It mainly does three things:

    1. Calls open() to open the /dev/binder driver device;
    2. Uses mmap() to create a mapping of size BINDER_VM_SIZE (1 MB - 8 KB);
    3. Sets the maximum number of concurrent Binder threads for the current process to 15 (via BINDER_SET_MAX_THREADS).
  2. ProcessState::self()->getContextObject(NULL): the return value of this call is a Binder reference with handle 0, i.e. a BpBinder: new BpBinder(0).

  3. interface_cast<IServiceManager>() is a template function that ultimately calls IServiceManager::asInterface():

    template<typename INTERFACE>
    inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
    {
        return INTERFACE::asInterface(obj);
    }

    android::sp<IServiceManager> IServiceManager::asInterface(const android::sp<android::IBinder>& obj)
    {
        android::sp<IServiceManager> intr;
        if (obj != NULL) {
            intr = static_cast<IServiceManager*>(
                obj->queryLocalInterface(IServiceManager::descriptor).get());
            if (intr == NULL) {
                intr = new BpServiceManager(obj);
            }
        }
        return intr;
    }

    So in effect the statement becomes:

gDefaultServiceManager = new BpServiceManager(new BpBinder(0));

That is, the Service Manager remote interface we obtain is essentially a BpServiceManager wrapping a Binder reference whose handle is 0, and obtaining it involves no cross-process call.
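
As a small usage sketch (hypothetical client code; only the checkService() call actually crosses a process boundary, not defaultServiceManager() itself):

// Hypothetical client-side usage of the ServiceManager proxy.
sp<IServiceManager> sm = defaultServiceManager();                 // local: BpServiceManager(BpBinder(0))
sp<IBinder> binder = sm->checkService(String16("media.player"));  // real Binder IPC to handle 0
if (binder != NULL) {
    // interface_cast turns the returned BpBinder(handle) into a typed proxy
    sp<IMediaPlayerService> player = interface_cast<IMediaPlayerService>(binder);
}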

Initialization and registration of an ordinary native Service

In the previous section we saw the class diagram of ServiceManager's remote-interface side, which is really a BpServiceManager. Here we take MediaPlayerService as an example and look at the class diagram on the service side.

MediaPlayerService

As you can see, the structure is very similar to the Bp side. The difference is that MediaPlayerService inherits from BnMediaPlayerService, BnMediaPlayerService inherits from BnInterface, and BnInterface inherits from BBinder; on this side the IBinder implementation class is BBinder. The rest is analogous.

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
    virtual const String16& getInterfaceDescriptor() const;

protected:
    virtual IBinder* onAsBinder();
};
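
To make the symmetry between the Bp and Bn sides concrete, here is a hypothetical interface IFoo sketched in the same style (IFoo, BpFoo, BnFoo and DO_SOMETHING are made-up names used only for illustration, not part of AOSP):

// Hypothetical interface declared in the usual IInterface / Bp / Bn pattern.
class IFoo : public IInterface {
public:
    DECLARE_META_INTERFACE(Foo);  // declares descriptor, asInterface(), getInterfaceDescriptor()
    virtual status_t doSomething(int32_t value) = 0;
    enum { DO_SOMETHING = IBinder::FIRST_CALL_TRANSACTION };
};

// Proxy side: lives in the client process and forwards calls through mRemote (a BpBinder).
class BpFoo : public BpInterface<IFoo> {
public:
    BpFoo(const sp<IBinder>& impl) : BpInterface<IFoo>(impl) {}
    virtual status_t doSomething(int32_t value) {
        Parcel data, reply;
        data.writeInterfaceToken(IFoo::getInterfaceDescriptor());
        data.writeInt32(value);
        return remote()->transact(DO_SOMETHING, data, &reply);
    }
};

// Stub side: lives in the server process; the concrete service subclasses BnFoo,
// and incoming transactions are unpacked in BnFoo::onTransact()
// (a sketch of that dispatch appears further below, where onTransact() is discussed).
class BnFoo : public BnInterface<IFoo> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0);
};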

The startup process of MediaPlayerService:

int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

The main steps are:

  1. sp<ProcessState> proc(ProcessState::self()); — analyzed in the previous section; it mainly opens the binder device and maps the memory.

  2. sp<IServiceManager> sm = defaultServiceManager(); obtains the ServiceManager interface, also analyzed in the previous section.

  3. MediaPlayerService::instantiate()

    void MediaPlayerService::instantiate() {
        defaultServiceManager()->addService(
                String16("media.player"), new MediaPlayerService());
    }

    addService takes two parameters: the name of the service and the object that implements it. First look at the definition of the BpServiceManager returned by defaultServiceManager:

    class BpServiceManager : public BpInterface<IServiceManager>
    {
    public:
        BpServiceManager(const sp<IBinder>& impl)
            : BpInterface<IServiceManager>(impl)
        {
        }

        ......

        virtual status_t addService(const String16& name, const sp<IBinder>& service)
        {
            Parcel data, reply;
            // IServiceManager::getInterfaceDescriptor() returns the string
            // "android.os.IServiceManager"; it is written as a string
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            // name is "media.player"; it is written as a string
            data.writeString16(name);
            data.writeStrongBinder(service);
            status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
            return err == NO_ERROR ? reply.readExceptionCode() : err;
        }
        ......

    };

    status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
    {
        return flatten_binder(ProcessState::self(), val, this);
    }

    Here flatten_binder(ProcessState::self(), val, this) turns the IBinder implementation (service) passed in into a flat_binder_object and serializes it into the Parcel. Every Binder entity or reference is represented by a flat_binder_object: its binder member is used when it describes a Binder entity, its handle member when it describes a Binder reference, and cookie is only meaningful for a Binder entity, where it carries extra data that the owning process interprets itself:

    status_t flatten_binder(const sp<ProcessState>& proc, const sp<IBinder>& binder, Parcel* out)
    {
        flat_binder_object obj;

        obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
        if (binder != NULL) {
            IBinder *local = binder->localBinder();
            if (!local) {
                BpBinder *proxy = binder->remoteBinder();
                if (proxy == NULL) {
                    LOGE("null proxy");
                }
                const int32_t handle = proxy ? proxy->handle() : 0;
                obj.type = BINDER_TYPE_HANDLE;
                obj.handle = handle;
                obj.cookie = NULL;
            } else { // this branch is taken here, because service is a BBinder, i.e. the service entity
                obj.type = BINDER_TYPE_BINDER;
                obj.binder = local->getWeakRefs();
                obj.cookie = local;
            }
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = NULL;
            obj.cookie = NULL;
        }

        return finish_flatten_binder(binder, obj, out); // serialize: write the flat_binder_object into out
    }

    So the writeStrongBinder step here means: convert the IBinder into a flat_binder_object and write it into the Parcel (out, i.e. data).
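
    For reference, the flat_binder_object structure that flatten_binder fills in looks roughly like this (adapted from the binder UAPI header; exact field types differ between older and newer kernel versions):

    // Sketch of the flat_binder_object layout (field types vary across kernel versions).
    struct flat_binder_object {
        __u32 type;                   // BINDER_TYPE_BINDER / BINDER_TYPE_HANDLE / ...
        __u32 flags;
        union {
            binder_uintptr_t binder;  // local object: pointer related to the BBinder (its weak refs)
            __u32 handle;             // remote object: the reference handle
        };
        binder_uintptr_t cookie;      // extra data, only meaningful for a local Binder entity
    };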

    Going back up: once writeStrongBinder is done, status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply); is executed.

    Since this is a BpServiceManager, the remote() member function comes from BpRefBase; it returns a BpBinder pointer, namely BpBinder(0). What follows is a long chain of calls:

    // android / platform / frameworks / native / master / . / libs / binder / BpBinder.cpp
    status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
    ...
    // IPCThreadState::self() initializes its member variables mIn and mOut
    status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
    ...
    }

    status_t IPCThreadState::transact(int32_t handle,
    uint32_t code, const Parcel& data,
    Parcel* reply, uint32_t flags)
    {
    flags |= TF_ACCEPT_FDS;
    . . . . . .
    // pack the data into the internal mOut parcel
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    . . . . . .
    if ((flags & TF_ONE_WAY) == 0)
    {
    . . . . . .
    if (reply)
    {
    err = waitForResponse(reply);
    }
    else
    {
    Parcel fakeReply;
    err = waitForResponse(&fakeReply);
    }
    . . . . . .
    }
    else
    {
    // oneway case: no need to wait for a reply
    err = waitForResponse(NULL, NULL);
    }

    return err;
    }

    status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
    {
    // wrap the data into a binder_transaction_data
    binder_transaction_data tr;
    tr.target.handle = handle; // handle = 0
    tr.code = code; // code = ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    // data is the Parcel that carries the Media service information
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
    tr.data_size = data.ipcDataSize();
    tr.data.ptr.buffer = data.ipcData();
    tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
    tr.data.ptr.offsets = data.ipcObjects();
    }
    ....
    mOut.writeInt32(cmd); //cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr)); // write the binder_transaction_data
    return NO_ERROR;
    }

    // In waitForResponse, the driver first acknowledges the earlier ADD_SERVICE request with BR_TRANSACTION_COMPLETE; then, once the target process has received and handled the BR_TRANSACTION work, the result is sent back to the current process, which executes the BR_REPLY command.
    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
    ...
    // talkWithDriver() carries out the cross-process transaction
    if ((err=talkWithDriver()) < NO_ERROR) break;
    ...
    // the reply of the transaction is recorded in mIn, so that is what gets parsed next
    cmd = mIn.readInt32();
    switch (cmd) {
    case BR_TRANSACTION_COMPLETE:
    if (!reply && !acquireResult) goto finish;
    break;
    ...
    }

    status_t IPCThreadState::talkWithDriver(bool doReceive)
    {
    // build a binder_write_read object from the mOut data and the mIn buffer
    binder_write_read bwr;
    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();
    // This is what we'll read.
    if (doReceive && needRead) {
    // fill in the receive-buffer info; any data received later is written straight into mIn
    bwr.read_size = mIn.dataCapacity();
    bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
    bwr.read_size = 0;
    }
    ...
    do {
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)

    // store the reply data that was received
    if (bwr.read_consumed > 0) {
    mIn.setDataSize(bwr.read_consumed);
    mIn.setDataPosition(0);
    }
    } while (err == -EINTR); // if interrupted, retry
    ...
    }

    A quick summary of the flow so far:
    binder_16


    From here on we are into binder driver territory. All you really need to know is that the result of talkWithDriver() is that mIn ends up holding the data, and that waitForResponse() then calls reply->ipcSetDataReference() to set the returned data. (I have not fully understood a lot of the driver code myself.)

// kernel driver code
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
struct binder_proc *proc = filp->private_data;
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;

// now inside the driver: copy the bwr struct from user space into kernel space
copy_from_user(&bwr, ubuf, sizeof(bwr));
...
switch (cmd) {
case BINDER_WRITE_READ: {
...
if (bwr.write_size > 0) {
// write the data
ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
...
}
if (bwr.read_size > 0) {
// read data from this process's/thread's own queue
ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
...
}
...
break;
}
// copy the bwr struct from kernel space back to user space
copy_to_user(ubuf, &bwr, sizeof(bwr));
......
}

binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed)
{
...
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
// copy the cmd from user space; here it is BC_TRANSACTION
if (get_user(cmd, (uint32_t __user *)ptr)) return -EFAULT;
ptr += sizeof(uint32_t);
switch (cmd) {
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
// copy the binder_transaction_data from user space
if (copy_from_user(&tr, ptr, sizeof(tr))) return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
...
}
*consumed = ptr - buffer;
}
return 0;
}

// binder_transaction() is mainly responsible for:
//  - creating a new binder_transaction object and pushing it onto the caller's transaction stack
//  - creating a new binder_work object and inserting it into the target process's todo queue
//  - converting between Binder entities and handles (flat_binder_object)

static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread,struct binder_transaction_data *tr, int reply)
{
......
// find the node from the handle; the handle is 0 here, so this resolves to binder_context_mgr_node
if (tr->target.handle) {
......
} else {
target_node = binder_context_mgr_node;
}
if (target_thread) {
...
} else {
// use the servicemanager process's todo queue
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
......
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc; // the target process of this transaction is the servicemanager process
t->to_thread = target_thread;
t->code = tr->code; // for this transaction, code = ADD_SERVICE_TRANSACTION
t->flags = tr->flags; // for this transaction, flags = 0
t->priority = task_nice(current);

...
// copy ptr.buffer and ptr.offsets of the user-space binder_transaction_data into the kernel
copy_from_user(t->buffer->data,
(const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size);
copy_from_user(offp,
(const void __user *)(uintptr_t)tr->data.ptr.offsets, tr->offsets_size);
off_end = (void *)offp + tr->offsets_size;
...

for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
// when registering a service, a BBinder is passed, so the type is BINDER_TYPE_BINDER
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER:
{
struct binder_ref *ref;
struct binder_node *node = binder_get_node(proc, fp->binder);
if (node == NULL) {
// create the binder_node in the requesting process (i.e. this MediaPlayerService process)
node = binder_new_node(proc, fp->binder, fp->cookie);
...
}
// create a binder_ref in the target process (i.e. the ServiceManager process)
ref = binder_get_ref_for_node(target_proc, node);
...
// rewrite the type to a HANDLE type
if (fp->type == BINDER_TYPE_BINDER)
fp->type = BINDER_TYPE_HANDLE;
else
fp->type = BINDER_TYPE_WEAK_HANDLE;
fp->binder = 0;
fp->handle = ref->desc; // set the handle value
fp->cookie = 0;
...
}
...
}
}

// add BINDER_WORK_TRANSACTION to the target queue; for this transaction that is target_proc->todo
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);

// add BINDER_WORK_TRANSACTION_COMPLETE to the requesting thread's todo queue
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);

// wake up the wait queue; for this transaction that is target_proc->wait
if (target_wait)
wake_up_interruptible(target_wait);
return;
}

// After the first pass through this function, control returns to waitForResponse; since the current thread's todo queue now has work, binder_thread_read is entered to handle it. The second time execution reaches this point, MediaPlayerService goes to sleep.

static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed, int non_block)
{
// when the number of consumed bytes is 0, put a BR_NOOP response code at ptr
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}

retry:
// binder_transaction() has already made transaction_stack non-NULL, so wait_for_proc_work is false
wait_for_proc_work = thread->transaction_stack == NULL &&
list_empty(&thread->todo);

thread->looper |= BINDER_LOOPER_STATE_WAITING;
if (wait_for_proc_work)
proc->ready_threads++; // one more idle binder thread in this process

// only when the current thread's todo queue is empty and transaction_stack is also empty does it start handling process-level work
if (wait_for_proc_work) {
if (non_block) {
...
} else
// if the process todo queue has no data, go to sleep and wait
ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
} else {
if (non_block) {
...
} else
// if the thread todo queue has data, continue; if it has no data, go to sleep and wait
ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
}

if (wait_for_proc_work)
proc->ready_threads--; // leaving the wait state, so one less idle binder thread in this process
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
...

while (1) {

uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
// first take transaction data from the thread's todo queue
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work, entry);
// if the thread todo queue has no data, take transaction data from the process todo queue
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work, entry);
} else {
// no data at all: go back to retry
if (ptr - buffer == 4 &&
!(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN))
goto retry;
break;
}

switch (w->type) {
case BINDER_WORK_TRANSACTION:
// fetch the transaction data
t = container_of(w, struct binder_transaction, work);
break;

case BINDER_WORK_TRANSACTION_COMPLETE:
cmd = BR_TRANSACTION_COMPLETE;
// write BR_TRANSACTION_COMPLETE into *ptr and break out of the loop
put_user(cmd, (uint32_t __user *)ptr);
list_del(&w->entry);
kfree(w);
break;

case BINDER_WORK_NODE: ... break;
case BINDER_WORK_DEAD_BINDER:
case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: ... break;
}

// only a BINDER_WORK_TRANSACTION continues past this point
if (!t)
continue;

if (t->buffer->target_node) {
// get the target node
struct binder_node *target_node = t->buffer->target_node;
tr.target.ptr = target_node->ptr;
tr.cookie = target_node->cookie; // set the cookie
t->saved_priority = task_nice(current);
...
cmd = BR_TRANSACTION; // set the command to BR_TRANSACTION
} else {
tr.target.ptr = NULL;
tr.cookie = NULL;
cmd = BR_REPLY; // set the command to BR_REPLY
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = t->sender_euid;

if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
// in the non-oneway case, save the caller's pid into sender_pid
tr.sender_pid = task_tgid_nr_ns(sender,
current->nsproxy->pid_ns);
} else {
// in the oneway case this value is 0
tr.sender_pid = 0;
}

tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (void *)t->buffer->data + proc->user_buffer_offset;
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size, sizeof(void *));

// write cmd and the data back to user space
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);

list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
} else {
t->buffer->transaction = NULL;
kfree(t); // communication finished, so free the transaction
}
break;
}
done:
*consumed = ptr - buffer;
// A new thread is spawned when requested_threads plus ready_threads equals 0, the number of
// started threads is below the maximum (15), and the looper state is REGISTERED or ENTERED.
if (proc->requested_threads + proc->ready_threads == 0 &&
proc->requested_threads_started < proc->max_threads &&
(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))) {
proc->requested_threads++;
// emit the BR_SPAWN_LOOPER command, which asks user space to spawn a new thread
put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer);
}
return 0;
}

At this point we have arrived in the server process (ServiceManager).

ServiceManager has been blocked inside `binder_loop`'s `ioctl()` call, in `binder_thread_read()`'s `wait_event_interruptible_exclusive()`; it is now woken up by the driver via wake_up_interruptible:
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
unsigned readbuf[32];

bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;

readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(unsigned));

for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (unsigned) readbuf;

res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

if (res < 0) {
LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}

res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
if (res == 0) {
LOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}

After being woken up, it reads the data passed over binder inside binder_thread_read(), fills the local variable struct binder_transaction_data tr, and copies tr into the buffer supplied from user space. On return, the local struct binder_write_read bwr inside binder_ioctl() is also copied back into the user-supplied buffer, and binder_ioctl() finally returns (when there is work, the loop breaks out and returns). Execution then continues in binder_parse():

int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;

while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
...
binder_dump_txn(txn);
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;

bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn); // build the binder_io from txn
// a Binder transaction has been received; dispatch it to the handler
res = func(bs, txn, &msg, &reply);
// send the reply event back to the binder driver
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
case : ...
}
return r;
}

The function pointer passed in here is svcmgr_handler, so we end up in:

int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
unsigned len;
void *ptr;
uint32_t strict_policy;

if (txn->target != svcmgr_handle)
return -1;

// Equivalent to Parcel::enforceInterface(), reading the RPC
// header with the strict mode policy mask and the interface name.
// Note that we ignore the strict_policy and don't propagate it
// further (since we do no outbound RPCs anyway).
strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len); // "android.os.IServiceManager"
if ((len != (sizeof(svcmgr_id) / 2)) ||
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
fprintf(stderr,"invalid id %s\n", str8(s));
return -1;
}

switch(txn->code) {
......
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len); // "media.player"
ptr = bio_get_ref(msg); // new MediaPlayerService()
if (do_add_service(bs, s, len, ptr, txn->sender_euid))
return -1;
break;
......
}

bio_put_uint32(reply, 0); // write 0 into reply
return 0;
}

// In ServiceManager, the flat_binder_object has been rewritten to BINDER_TYPE_HANDLE; the handle
// value is stored in a struct svcinfo, which is then inserted at the head of the svclist list.
int do_add_service(struct binder_state *bs,
uint16_t *s, unsigned len,
void *ptr, unsigned uid)
{
struct svcinfo *si;
// LOGI("add_service('%s',%p) uid=%d\n", str8(s), ptr, uid);

if (!ptr || (len == 0) || (len > 127))
return -1;

if (!svc_can_register(uid, s)) {
LOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n",
str8(s), ptr, uid);
return -1;
}

si = find_svc(s, len); // look up the service by name; here the name is "media.player"
if (si) {
if (si->ptr) {
LOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED\n",
str8(s), ptr, uid);
return -1;
}
si->ptr = ptr;
} else {
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
LOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n",
str8(s), ptr, uid);
return -1;
}
si->ptr = ptr;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = svcinfo_death;
si->death.ptr = si;
si->next = svclist;
svclist = si;
}

binder_acquire(bs, ptr);
binder_link_to_death(bs, ptr, &si->death);
return 0;
}

Finally, binder_send_reply() is executed, which enters ioctl() once more, wraps the result into another transaction, and wakes up MediaPlayerService to handle it.
As far as ServiceManager is concerned, this is the end: IServiceManager::addService has completed.
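
For completeness, binder_send_reply() itself is short; roughly (a sketch based on an older version of frameworks/native/cmds/servicemanager/binder.c), it batches a BC_FREE_BUFFER for the transaction buffer it just consumed together with a BC_REPLY carrying the reply binder_io, and writes both to the driver in one binder_write() call:

// Sketch of binder_send_reply() from an older servicemanager binder.c (abbreviated).
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;      // BC_FREE_BUFFER
        void *buffer;           // the transaction buffer to release
        uint32_t cmd_reply;     // BC_REPLY
        struct binder_txn txn;  // the reply payload
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        // an error is returned as a status code instead of normal reply data
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offs_size = 0;
        data.txn.data = &status;
        data.txn.offs = 0;
    } else {
        // normal case: hand the reply binder_io's data and offsets buffers to the driver
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data = reply->data0;
        data.txn.offs = reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}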

binder_16

Data flow:

  1. The BBinder plus the service name are serialized into a Parcel.

  2. Together with target = 0 and code = ADD_SERVICE, this is turned into a binder_transaction_data object.

  3. With cmd = BC_TRANSACTION prepended, it is written into IPCThreadState's mOut.

  4. mOut and mIn are used to build a binder_write_read object, which is sent to the binder driver.

  5. The binder driver copies this data into the kernel, builds a binder_transaction_data and posts it to the server process's todo queue, while replying BR_TRANSACTION_COMPLETE to the calling process. (Why copy instead of just passing the pointer as before? Because the pointer belongs to the client process: the kernel can access that memory, but the server process cannot, so pointers cannot be used across processes.)

  6. The server process builds a binder_transaction_data with BC_REPLY and sends it back to the binder driver.

  7. The above described service_manager's binder_loop loop. For other services, the binder threads are started by the following two lines:

    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    void IPCThreadState::joinThreadPool(bool isMain)
    {
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
    processPendingDerefs(); // handle pending object dereferences
    result = getAndExecuteCommand(); // fetch and execute a command

    if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
    ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
    mProcess->mDriverFD, result);
    abort();
    }

    // a non-main binder thread that is no longer needed exits here
    if(result == TIMED_OUT && !isMain) {
    break;
    }
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
    }

    status_t IPCThreadState::getAndExecuteCommand()
    {
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // interact with the Binder driver
    if (result >= NO_ERROR) {
    size_t IN = mIn.dataAvail();
    if (IN < sizeof(int32_t)) return result;
    cmd = mIn.readInt32(); // read the command

    pthread_mutex_lock(&mProcess->mThreadCountLock);
    mProcess->mExecutingThreadsCount++;
    pthread_mutex_unlock(&mProcess->mThreadCountLock);

    result = executeCommand(cmd);

    pthread_mutex_lock(&mProcess->mThreadCountLock);
    mProcess->mExecutingThreadsCount--;
    pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
    pthread_mutex_unlock(&mProcess->mThreadCountLock);

    set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }
    return result;
    }

    From the analysis above we know that talkWithDriver() wraps the data into a transaction and sends it to the server side; the server handles the request and wraps the result into a transaction in return, and talkWithDriver() parses that returned transaction to obtain the data. So once the data has come back, executeCommand() is called:

    status_t IPCThreadState::executeCommand(int32_t cmd)
    {
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    ......

    case BR_TRANSACTION:
    {
    binder_transaction_data tr;
    result = mIn.read(&tr, sizeof(tr));
    ALOG_ASSERT(result == NO_ERROR,
    "Not enough command data for brTRANSACTION");
    if (result != NO_ERROR) break;
    //Record the fact that we're in a binder call.
    mIPCThreadStateBase->pushCurrentState(
    IPCThreadStateBase::CallState::BINDER);
    Parcel buffer;
    buffer.ipcSetDataReference(
    reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
    tr.data_size,
    reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
    tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
    const pid_t origPid = mCallingPid;
    const uid_t origUid = mCallingUid;
    const int32_t origStrictModePolicy = mStrictModePolicy;
    const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
    mCallingPid = tr.sender_pid;
    mCallingUid = tr.sender_euid;
    mLastTransactionBinderFlags = tr.flags;
    //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
    Parcel reply;
    status_t error;
    IF_LOG_TRANSACTIONS() {
    TextOutput::Bundle _b(alog);
    alog << "BR_TRANSACTION thr " << (void*)pthread_self()
    << " / obj " << tr.target.ptr << " / code "
    << TypeCode(tr.code) << ": " << indent << buffer
    << dedent << endl
    << "Data addr = "
    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
    << ", offsets addr="
    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
    }
    if (tr.target.ptr) {
    // We only have a weak reference on the target object, so we must first try to
    // safely acquire a strong reference before doing anything else with it.
    if (reinterpret_cast<RefBase::weakref_type*>(
    tr.target.ptr)->attemptIncStrong(this)) {
    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
    &reply, tr.flags);
    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
    } else {
    error = UNKNOWN_TRANSACTION;
    }
    } else {
    error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
    }
    mIPCThreadStateBase->popCurrentState();
    //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
    // mCallingPid, origPid, origUid);
    if ((tr.flags & TF_ONE_WAY) == 0) {
    LOG_ONEWAY("Sending reply to %d!", mCallingPid);
    if (error < NO_ERROR) reply.setError(error);
    sendReply(reply, 0);
    } else {
    LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
    }
    mCallingPid = origPid;
    mCallingUid = origUid;
    mStrictModePolicy = origStrictModePolicy;
    mLastTransactionBinderFlags = origTransactionBinderFlags;
    IF_LOG_TRANSACTIONS() {
    TextOutput::Bundle _b(alog);
    alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
    << tr.target.ptr << ": " << indent << reply << dedent << endl;
    }
    }
    break;

    .......
    }

    if (result != NO_ERROR) {
    mLastError = result;
    }
    return result;
    }

    status_t BBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
    case PING_TRANSACTION:
    reply->writeInt32(pingBinder());
    break;
    default:
    err = onTransact(code, data, reply, flags);
    break;
    }

    if (reply != NULL) {
    reply->setDataPosition(0);
    }

    return err;
    }

    Because the service side inherits from BBinder, this ends up calling the service's own onTransact(), i.e. MediaPlayerService's onTransact(), which performs the requested action. That is how a client call crosses processes and reaches the server. A sketch of what such an onTransact() dispatch looks like follows below.
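
    As an illustration of that dispatch, here is a minimal, hypothetical onTransact() for the made-up BnFoo stub sketched earlier (BnMediaPlayerService::onTransact() follows the same pattern, with one case per transaction code):

    // Hypothetical stub-side dispatch, mirroring what a real BnXxx::onTransact() does.
    status_t BnFoo::onTransact(uint32_t code, const Parcel& data,
                               Parcel* reply, uint32_t flags)
    {
        switch (code) {
            case DO_SOMETHING: {
                CHECK_INTERFACE(IFoo, data, reply);     // verify the interface token written by the proxy
                int32_t value = data.readInt32();       // unmarshal the arguments
                reply->writeInt32(doSomething(value));  // call the real implementation, marshal the result
                return NO_ERROR;
            }
            default:
                // unknown codes fall through to BBinder (which handles e.g. PING_TRANSACTION)
                return BBinder::onTransact(code, data, reply, flags);
        }
    }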

How a native client obtains a service's remote interface

The previous section analyzed BpServiceManager::addService(); this section looks at getService(name).

class BpServiceManager : public BpInterface<IServiceManager>
{
......

virtual sp<IBinder> getService(const String16& name) const
{
// if the service is not ready yet, sleep for 1 second and retry, up to 5 times;
// if it still cannot be obtained, return NULL so the caller does not block indefinitely (avoiding an ANR)
unsigned n;
for (n = 0; n < 5; n++){
sp<IBinder> svc = checkService(name);
if (svc != NULL) return svc;
LOGI("Waiting for service %s...\n", String8(name).string());
sleep(1);
}
return NULL;
}

virtual sp<IBinder> checkService( const String16& name) const
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();
}

......
};

Most of this call chain was already covered in the previous section, so here is just a brief outline:

remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);

status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);

status_t IPCThreadState::transact(int32_t handle,uint32_t code, const Parcel& data,Parcel* reply, uint32_t flags)

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)

status_t IPCThreadState::talkWithDriver(bool doReceive)

long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed)

static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread,struct binder_transaction_data *tr, int reply)

// wake up ServiceManager
wake_up_interruptible(target_wait);

static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed, int non_block)

IPCThreadState::talkWithDriver::ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)

// the current thread sleeps, waiting for ServiceManager to return the result
ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));

// after ServiceManager has been woken up
static int binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed, int non_block)
// obtain the transaction t
t = container_of(w, struct binder_transaction, work);

int binder_parse(struct binder_state *bs, struct binder_io *bio,uint32_t *ptr, uint32_t size, binder_handler func)

void bio_init(struct binder_io *bio, void *data,uint32_t maxdata, uint32_t maxoffs)

void bio_init_from_txn(struct binder_io *bio, struct binder_txn *txn)

int svcmgr_handler(struct binder_state *bs,struct binder_txn *txn,struct binder_io *msg,struct binder_io *reply)

void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len)

struct svcinfo *find_svc(uint16_t *s16, unsigned len)

void bio_put_ref(reply, ptr);
void bio_put_ref(struct binder_io *bio, void *ptr)
{
struct binder_object *obj;

if (ptr)
obj = bio_alloc_obj(bio);
else
obj = bio_alloc(bio, sizeof(*obj));

if (!obj)
return;

obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
obj->type = BINDER_TYPE_HANDLE;
obj->pointer = ptr;
obj->cookie = 0;
}

void binder_send_reply(struct binder_state *bs,struct binder_io *reply,void *buffer_to_free,int status)

int binder_write(struct binder_state *bs, void *data, unsigned len)

long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,void __user *buffer, int size, signed long *consumed)

static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread,struct binder_transaction_data *tr, int reply)

struct binder_ref *ref = binder_get_ref(proc, fp->handle);
// the old handle belongs to the ServiceManager process; a new handle must be created here and returned for the client process to use
new_ref = binder_get_ref_for_node(target_proc, ref->node);
// clean up, go to sleep
...

// wake up the requesting client thread
...

static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread, void __user *buffer, int size, signed long *consumed, int non_block)

copy_to_user(ubuf, &bwr, sizeof(bwr))

reply->ipcSetDataReference( reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),tr.data_size, reinterpret_cast<const size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(size_t),freeBuffer, this);

//android / platform / frameworks / native / master / . / libs / binder / Parcel.cpp
status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
status_t status = readNullableStrongBinder(val);
if (status == OK && !val->get()) {
status = UNEXPECTED_NULL;
}
return status;
}

//android / platform / frameworks / native / master / . / libs / binder / Parcel.cpp
status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
return unflatten_binder(ProcessState::self(), *this, val);
}

//android / platform / frameworks / native / master / . / libs / binder / Parcel.cpp
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->hdr.type) {
case BINDER_TYPE_BINDER:
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(nullptr, *flat, in);
case BINDER_TYPE_HANDLE:
*out = proc->getStrongProxyForHandle(flat->handle);
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}

// create a BpBinder() from the handle
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;

AutoMutex _l(mLock);

handle_entry* e = lookupHandleLocked(handle);

if (e != NULL) {
// We need to create a new BpBinder if there isn't currently one, OR we
// are unable to acquire a weak reference on this current one. See comment
// in getWeakProxyForHandle() for more info about this.
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
// This little bit of nastyness is to allow us to add a primary
// reference to the remote proxy when this team doesn't have one
// but another team is sending the handle to us.
result.force_set(b);
e->refs->decWeak(this);
}
}

return result;
}

android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(const android::sp<android::IBinder>& obj)

In the end we obtain a BpMediaPlayerService object.

Summary (C++ part)

  1. Obtaining the ServiceManager remote interface requires no cross-process call, because ServiceManager's binder entity always has handle 0: new BpBinder(0) is enough to get the binder reference and wrap it into a BpServiceManager.
  2. Obtaining the remote interface of an ordinary service does require a cross-process call, because the request has to go to ServiceManager through BpServiceManager. ServiceManager returns the handle of the named service's Binder entity to the driver, the driver reads it out and serializes it back to the client, and the client can then call new BpBinder(handle) to get the remote proxy object of that service.
  3. When ServiceManager's own functionality is invoked (e.g. addService, getService), ServiceManager parses the data handed up by the driver inside binder_loop, handles it directly, and then returns data to the driver. When an ordinary service's functionality is invoked, the data handed up by the driver is dispatched to BBinder's virtual functions (onTransact) for handling.
  4. The IPCThreadState class, with the help of the ProcessState class, is responsible for interacting with the Binder driver.
  5. Note the symmetry: when we pass a BBinder object into addService, writeStrongBinder() serializes it into a flat_binder_object before handing it to the driver; and what the driver returns in getService() is also a flat_binder_object, containing the server's handle, which readStrongBinder() parses into a BpBinder object returned to the caller. A compressed sketch of this round trip follows below.
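
A compressed sketch of point 5 (hypothetical fragments, not a complete program):

// Server side: a concrete BBinder subclass is written into the Parcel.
// writeStrongBinder() -> flatten_binder() -> flat_binder_object with type BINDER_TYPE_BINDER.
Parcel data;
data.writeStrongBinder(new MediaPlayerService());

// Inside the driver the object is rewritten to BINDER_TYPE_HANDLE for the receiving process.

// Client side: readStrongBinder() -> unflatten_binder() -> getStrongProxyForHandle(handle),
// which yields a BpBinder(handle) that interface_cast wraps into a typed proxy.
sp<IBinder> binder = reply.readStrongBinder();
sp<IMediaPlayerService> proxy = interface_cast<IMediaPlayerService>(binder);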

Flow summary:
binder_8
binder_21

References:
《深入理解Android 卷1》
《深入理解Android 卷3》
Android进程间通信(IPC)机制Binder简要介绍和学习计划
深入分析Android Binder 驱动
红茶一杯话Binder
深入理解Binder原理
Parcel数据传输过程,简要分析Binder流程