
Windows Embedded CE 6.0 Internals, Part 1

 3. The Interrupt Mechanism

 The interrupt model consists of the following steps:

 (1) A device raises a hardware interrupt. ->

 (2) The kernel (Kernel.dll) fields the interrupt and calls the corresponding interrupt service routine (ISR). ->

 (3) The ISR handles the interrupt quickly. ->

 (4) The interrupt service thread (IST) in the driver is notified to process the interrupt; the kernel signals the IST with an event.

 Both the ISR and the IST take part in handling an interrupt, but they differ: the ISR runs at a higher interrupt level and does only minimal work, while the IST performs the bulk of the processing. The details are described below.

 "Real-time applications use interrupts to respond to external events in a timely manner. To do this, Windows Embedded CE 6.0 breaks interrupt processing into two steps: an interrupt service routine (ISR) and an interrupt service thread (IST). The ISR runs immediately to identify and mask the interrupt, and perform any high priority tasks. The corresponding IST is a normal system thread (although typically of high priority) and can perform the bulk of the handling that is not time critical. This two stage model allows the operating system to maximize the amount of time the system is able to respond to other high priority interrupts.

 The kernel is able to handle a total of 64 interrupts from external sources, some of which are predefined (e.g. system timer interrupt, real time clock etc). Devices that have more than 64 interrupt sources that need to be exposed (rare) must implement a mechanism to share interrupt identifiers. Typically this is done by multiplexing related interrupts together in the ISR, and demultiplexing them in the IST."
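In a CE driver, the two-stage model above typically takes the shape of an IST loop like the following. This is a minimal sketch, not a complete driver: SYSINTR_MY_DEVICE stands in for a real logical interrupt ID returned by the ISR, and DoDeviceWork is a hypothetical placeholder for the device-specific processing.

```c
#include <windows.h>

// Hypothetical logical interrupt ID; a real driver gets this from its
// ISR registration, e.g. via KernelIoControl/OALIntrTranslateIrq.
#define SYSINTR_MY_DEVICE  (SYSINTR_FIRMWARE + 1)

static void DoDeviceWork(void)
{
    // Placeholder for the bulk of the (non time-critical) handling.
}

DWORD WINAPI IstThread(LPVOID lpParam)
{
    // Auto-reset event: the kernel signals it once per interrupt.
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

    // Associate the logical interrupt with the event; from now on the
    // kernel signals hEvent whenever the ISR returns SYSINTR_MY_DEVICE.
    if (!InterruptInitialize(SYSINTR_MY_DEVICE, hEvent, NULL, 0))
        return 1;

    for (;;)
    {
        WaitForSingleObject(hEvent, INFINITE); // step (4): kernel signals IST
        DoDeviceWork();                        // process the interrupt
        InterruptDone(SYSINTR_MY_DEVICE);      // re-enable the interrupt
    }
}
```

An IST is usually created at high priority (see the priority table in the next section) so that it preempts ordinary application threads.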

 That is all on interrupts for now; I will cover the details once I have digested them further.

 4. Threads, Thread Scheduling, and Thread Synchronization

 A thread is the basic unit that the system independently schedules and dispatches. When the system creates a process, it always contains at least one thread (the primary thread), so a process can be thought of as a container. Further thread basics are omitted here.

 The thread priority levels of Windows Embedded CE 6.0 are listed below; priority 0 is the highest and 255 the lowest:

 Priority   Component
 0-96       Typically reserved for real-time drivers
 97-152     Used by the default Windows Embedded CE-based device drivers
 153-247    Typically reserved for non-real-time drivers
 248-255    Mapped to other non-real-time priorities

 Applications normally run at priorities 248-255; a newly created thread defaults to priority 251.

 Another important issue to be aware of is priority inversion. Suppose three threads A, B, and C have different priorities: A the highest, B in the middle, and C the lowest, with A and C contending for some of the same resources. While the processor is running C, the higher-priority B may preempt C and start running, so when C will release the resources it holds becomes unknown. A can be scheduled only after C releases the resources A needs, so how long A stays blocked is also unknown. The lower-priority B thus ends up running ahead of the higher-priority A: the priorities have been inverted.

 On a desktop system such as XP this is not a severe problem; at worst A is blocked for some extra time. In a real-time system, however, and especially in a hard real-time system, it is serious. There are generally two ways to handle it, and Windows Embedded CE 6.0 uses the single-level approach, as the following excerpt explains.

 "Single Level and Fully Nested. In the Fully Nested Mode the OS will walk through all threads blocked and keep boosting each one until the high priority thread can run. This prevents an entire class of deadlocks. Unfortunately it also means an O(n) operation with pre-emption turned off while the scheduler figures out how to get everything unblocked to keep things going. This is a major problem for real-time systems that need deterministic response times.

 In order to support hard real-time systems Windows Embedded CE 6.0 uses a single level handling of priority inversion. That is, the OS will boost only one thread to release a block. In this scenario the OS will boost the priority of the low priority thread to the priority of the high priority thread until it is able to release the needed resource. It is therefore the responsibility of the developer to structure code such that deadlocks are avoided."

 For more on this topic, see Wikipedia and the Embedded website.

 Thread-related APIs

 CeSetThreadPriority/CeGetThreadPriority give access to all 256 priority levels in CE.

 SetThreadPriority/GetThreadPriority are legacy functions that still work in newer versions of CE, but they can only access the lowest eight priorities (248-255).
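As a sketch of the difference, a thread can be placed in the driver priority range only through the CE-specific call; the priority value 100 below is an arbitrary example in the 97-152 range.

```c
#include <windows.h>

DWORD WINAPI WorkerThread(LPVOID lpParam)
{
    // ... time-sensitive work ...
    return 0;
}

void StartHighPriorityWorker(void)
{
    HANDLE hThread = CreateThread(NULL, 0, WorkerThread, NULL, 0, NULL);
    if (hThread != NULL)
    {
        // The legacy SetThreadPriority could only reach 248-255;
        // CeSetThreadPriority accepts the full 0-255 range.
        CeSetThreadPriority(hThread, 100); // example driver-range priority
        CloseHandle(hThread);
    }
}
```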

 Sleep(n) suspends the thread for at least n milliseconds.

 Sleep(0) yields the rest of the time slice so that other threads can run.

 SleepTillTick suspends the thread until the next system tick.

 WaitForSingleObject blocks until the specified kernel object becomes signaled.

 WaitForMultipleObjects blocks until one of the specified kernel objects (one or more) becomes signaled.
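The wait functions are commonly used to block on several objects at once. A minimal sketch, assuming the two event handles were created elsewhere with CreateEvent; note that on CE the wait-all mode is not supported, so the third argument is FALSE:

```c
#include <windows.h>

// Loop until hStopEvent is signaled, processing data whenever
// hDataEvent is signaled.
void PumpLoop(HANDLE hStopEvent, HANDLE hDataEvent)
{
    HANDLE waits[2] = { hStopEvent, hDataEvent };

    for (;;)
    {
        DWORD rc = WaitForMultipleObjects(2, waits, FALSE, INFINITE);
        if (rc == WAIT_OBJECT_0)            // hStopEvent signaled
            break;
        else if (rc == WAIT_OBJECT_0 + 1)   // hDataEvent signaled
        {
            // ... process the data here ...
        }
    }
}
```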

 Thread Synchronization

 Several kinds of objects are used for thread synchronization: critical sections, mutexes, semaphores, events, and the interlocked functions. A brief introduction to each follows; for more detail (for example, the essential differences between critical sections and mutexes, and how their performance compares), consult online resources.

 In CE 6.0 each synchronization object type has its own namespace, and even an empty string "" is treated as a name. On desktop systems, all synchronization objects share a single namespace.

 1.Critical Sections

 The advantage of a critical section is that no transition into the kernel occurs when the resource is not being contended by another thread. So when synchronization is needed within a single process and contention is rare, a critical section is the better choice.

 InitializeCriticalSection initializes the critical-section data structure.

 EnterCriticalSection blocks the thread until it acquires ownership of the critical section.

 TryEnterCriticalSection attempts to acquire ownership without blocking; it fails if the section is currently contended.

 LeaveCriticalSection releases ownership of the critical section.

 DeleteCriticalSection cleans up the resources.
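Putting the five calls together, a typical in-process use looks like this (a minimal sketch protecting a shared counter):

```c
#include <windows.h>

static CRITICAL_SECTION g_cs;
static int g_count;

void InitCounter(void)
{
    InitializeCriticalSection(&g_cs); // must run before any Enter call
}

void IncrementCounter(void)
{
    EnterCriticalSection(&g_cs);  // blocks until ownership is acquired;
                                  // no kernel call if uncontended
    g_count++;                    // protected region
    LeaveCriticalSection(&g_cs);
}

void DestroyCounter(void)
{
    DeleteCriticalSection(&g_cs); // clean up when no thread uses it
}
```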

 2.Mutexes

 CreateMutex Creates a named or unnamed mutex object.

 A handle to the mutex object indicates success. If the named mutex object existed before the function call, the function returns a handle to the existing object, and GetLastError returns ERROR_ALREADY_EXISTS. Otherwise, the caller created the mutex.

 NULL indicates failure. To get extended error information, call GetLastError.

 The call itself does not block; the return status tells the caller whether the mutex already existed or had been abandoned.

 WaitForSingleObject/WaitForMultipleObjects Calls blocked until current owner releases specified mutex object. Calls non-blocking while waiting for a mutex object it already owns.

 ReleaseMutex Must be called once for each successful return from a wait function. The mutex enters the abandoned state if the owner thread terminates without calling it.

 CloseHandle Releases the handle; the mutex object is destroyed when its last handle is closed.
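A sketch of the full mutex life cycle described above; the name "MyAppMutex" is an arbitrary example used for cross-process serialization:

```c
#include <windows.h>

void UseSharedResource(void)
{
    // Create (or open) a named mutex; FALSE means we do not request
    // initial ownership.
    HANDLE hMutex = CreateMutex(NULL, FALSE, TEXT("MyAppMutex"));
    if (hMutex == NULL)
        return; // failure; call GetLastError for details

    if (GetLastError() == ERROR_ALREADY_EXISTS)
    {
        // Another thread/process created it first; we received a
        // handle to the existing object.
    }

    WaitForSingleObject(hMutex, INFINITE); // acquire ownership
    // ... access the shared resource ...
    ReleaseMutex(hMutex);                  // one release per successful wait

    CloseHandle(hMutex); // object destroyed when the last handle closes
}
```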

 3.Semaphores

 CreateSemaphore Creates named or unnamed semaphore object if it doesn’t already exist.

 WaitForSingleObject/WaitForMultipleObjects Calls blocked until semaphore count is non-zero. Semaphore count decreased when wait succeeds.

 ReleaseSemaphore Increments semaphore count by specified amount.

 CloseHandle Destroys a semaphore object upon closing its last handle.
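A sketch of the semaphore calls above, using an arbitrary example count of three concurrent slots:

```c
#include <windows.h>

static HANDLE g_hSem;

void InitPool(void)
{
    // Unnamed semaphore: initial count 3, maximum count 3.
    g_hSem = CreateSemaphore(NULL, 3, 3, NULL);
}

void UseSlot(void)
{
    // Blocks while the count is zero; a successful wait decrements it.
    WaitForSingleObject(g_hSem, INFINITE);

    // ... at most three threads execute here at once ...

    ReleaseSemaphore(g_hSem, 1, NULL); // increment the count by 1
}

void DestroyPool(void)
{
    CloseHandle(g_hSem); // destroyed when the last handle closes
}
```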

 4.Manual Events

 Signaled with SetEvent. The kernel releases all waiting threads, and all subsequently waiting threads, until the event is explicitly set to non-signaled with ResetEvent.

 Signaled with PulseEvent. Kernel releases all waiting threads. Kernel automatically transitions event to non-signaled.

 5.Autoreset Events

 Signaled with SetEvent. The kernel releases a single waiting thread; other waiting threads remain blocked. The event remains signaled until one thread is released, at which point the kernel automatically transitions it to non-signaled.

 Signaled with PulseEvent. The kernel releases at most one waiting thread, then automatically transitions the event to non-signaled, even if no thread was released.
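The difference between the two event types comes down to the bManualReset flag passed to CreateEvent, as this sketch shows:

```c
#include <windows.h>

void EventDemo(void)
{
    // Manual-reset (bManualReset = TRUE): stays signaled, releasing
    // every current and future waiter, until ResetEvent is called.
    HANDLE hManual = CreateEvent(NULL, TRUE, FALSE, NULL);

    // Auto-reset (bManualReset = FALSE): releases a single waiter,
    // then the kernel returns it to non-signaled automatically.
    HANDLE hAuto = CreateEvent(NULL, FALSE, FALSE, NULL);

    SetEvent(hManual);   // all waiters run until the reset below
    ResetEvent(hManual); // explicitly back to non-signaled

    SetEvent(hAuto);     // exactly one waiter runs, then non-signaled

    CloseHandle(hManual);
    CloseHandle(hAuto);
}
```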

 6.Interlocked Functions

 These solve the problem of atomic access to shared data.

 The interlocked functions allow a thread to safely perform a read/modify/write sequence on 32 bit aligned shared data without disabling interrupts or incurring the overhead of a kernel call. The interlocked functions are the most efficient mechanism for safely performing this type of access. The kernel implements this functionality by restarting the function call if it is interrupted by an interrupt or data abort before completing.

 InterlockedIncrement Increment a shared variable and check resulting value.

 InterlockedDecrement Decrement shared variable and check resulting value.

 InterlockedExchange Atomically sets a variable to a new value and returns its previous value.

 InterlockedTestExchange Exchange values when a variable matches.

 InterlockedCompareExchange Atomic exchange based on compare.

 InterlockedCompareExchangePointer Exchange values on atomic compare.

 InterlockedExchangePointer Atomically sets a pointer variable to a new value and returns its previous value.

 InterlockedExchangeAdd Atomically adds a value to an Addend variable and returns its original value.
