Open Theses & Dissertations
Tuwhera Open Access Theses & Dissertations contains digital copies of theses, dissertations and research projects from AUT's postgraduate research, deposited with the Library since 2002. The full-text digital files are available if the author has given permission for their thesis, dissertation or research project to be available open access.
Deposit your thesis
Postgraduate students, deposit your thesis here before you graduate.
If you have questions regarding thesis deposit, please contact us by email.
Browse
Browsing Open Theses & Dissertations by Supervisor "Al-Anbuky, Adnan"
Now showing 1 - 20 of 25
- Item: Adaptive Quality of Service for IoT-based Wireless Sensor Networks (Auckland University of Technology, 2018) Syed Nor Azlan, Syarifah Ezdiani; Al-Anbuky, Adnan; Sarkar, Nurul
The future of the Internet of Things (IoT) is envisaged to consist of a large number of wireless, resource-constrained devices connected to the Internet. Moreover, many novel real-world services offered by IoT devices are realised by wireless sensor networks (WSNs). Integrating WSNs with the Internet has therefore brought forward the requirement for end-to-end quality of service (QoS) guarantees. In this thesis, a QoS framework for integrating WSNs with heterogeneous data traffic is proposed. The concept of Adaptive Service Differentiation for Heterogeneous Data in WSN (ADHERE) is proposed based on the varying QoS factors and requirements analysis of mixed traffic within an IoT-based WSN. The objective of the QoS framework is to meet the requirements of heterogeneous data traffic in the WSN in the domains of timeliness and reliability. Another objective is to implement an adaptive QoS scheme that can react to dynamic network changes. This thesis provides the literature analysis and background study for integrating a WSN that carries heterogeneous data traffic with the Internet. In the discussion of network modelling and implementation tools for the testing, this thesis provides an insight into the different tools that are available and their ability to investigate the concept of service differentiation among heterogeneous traffic within the IoT-based WSN. Furthermore, the major components of ADHERE are presented in the Concept chapter: a heterogeneous traffic class queuing model that encompasses a service differentiation policy, a congestion control unit, and a rate adjustment unit that supports the adaptive mechanism. Network modelling and simulation of the ADHERE QoS framework, carried out primarily using the network simulation tool Riverbed Modeler, are also presented. Additionally, a proposed co-simulation between Riverbed Modeler and MATLAB is introduced, which aims to provide seamless QoS monitoring using the ADHERE concept. The simulation results suggest that real-time traffic achieves a low bounded delay while delay-tolerant traffic experiences a lower packet drop. This indicates that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently using ADHERE. Furthermore, verification of and added value to the ADHERE QoS model using a neural network are also presented. The learning capabilities in ADHERE optimise the QoS framework's performance by accommodating the QoS requirements of the network through unpredictable traffic dynamics and when complex network behaviour takes place. Before concluding the thesis, the implementation of ADHERE QoS as a use case in a physical test environment is also discussed. The test environment offers a flexible system that is capable of reacting to the dynamic changes of process demands. Physical network performance can be predicted by analysing historical data in the background on a network simulator or virtual network. Finally, this thesis offers a conclusion with an indication of our future research work.
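As a rough illustration of the kind of class-based service differentiation the abstract describes (not the thesis's actual ADHERE implementation), the sketch below queues real-time and delay-tolerant packets separately, serves the real-time class first, and throttles the delay-tolerant source when congestion is detected; all names and thresholds are invented for illustration.

```python
from collections import deque

# Hypothetical illustration of class-based service differentiation:
# two traffic classes, a simple congestion check, and rate adjustment.
REALTIME, DELAY_TOLERANT = "rt", "dt"

class ServiceDifferentiator:
    def __init__(self, queue_limit=50):
        self.queues = {REALTIME: deque(), DELAY_TOLERANT: deque()}
        self.queue_limit = queue_limit          # congestion threshold (assumed)
        self.dt_admit_ratio = 1.0               # fraction of dt packets admitted

    def enqueue(self, packet, traffic_class):
        if traffic_class == DELAY_TOLERANT and self._congested():
            # Rate adjustment: shed a growing share of delay-tolerant load.
            self.dt_admit_ratio = max(0.1, self.dt_admit_ratio * 0.9)
            if hash(packet) % 100 >= self.dt_admit_ratio * 100:
                return False                    # packet rejected at admission
        self.queues[traffic_class].append(packet)
        return True

    def dequeue(self):
        # Real-time packets are always served first (timeliness);
        # delay-tolerant packets wait longer but are rarely dropped (reliability).
        for cls in (REALTIME, DELAY_TOLERANT):
            if self.queues[cls]:
                return cls, self.queues[cls].popleft()
        return None

    def _congested(self):
        return sum(len(q) for q in self.queues.values()) > self.queue_limit
```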
- Item: An authenticated key agreement scheme for sensor networks (Auckland University of Technology, 2014) Yang, Mee Loong; Al-Anbuky, Adnan; Liu, William
In wireless sensor networks, the messages between pairs of communicating nodes are open to eavesdropping, tampering, and forgery. These messages can easily be protected using cryptographic means, but the nodes need to share a common secret pairwise key. This thesis proposes a new scheme, the Blom-Yang key agreement (BYka) scheme, that enables pairs of sensor nodes in large networks to compute their pairwise keys quickly and efficiently. Prior to deployment, the Trusted Authority (TA) assigns each node its public ID and, using its master keys, computes and stores in the nodes their private key-sets. When a pair of nodes need to obtain their pairwise keys, they exchange their public key identifiers (IDs), which are just 16-bit integers. Using the counterpart's ID with its own set of private keys, each node is able to compute a large common pairwise key, but only if both have obtained their keying material from the same TA. Hence, the scheme is also mutually authenticating. The computations use simple arithmetic operations which are fast and efficient, easily undertaken by sensor devices which have limited computational, memory, and energy resources. For example, the scheme is able to compute keys of 128 bits in 279 milliseconds on the MICAz mote, requiring 1170 bytes of memory to store the private keying material. Similar key agreement schemes, already widely used in computer networks, use public key cryptographic algorithms which require computationally expensive mathematical operations, taking much longer and requiring far more resources. The security of the BYka scheme is based on the difficulty of obtaining information about the private-public-master-key associations (PPMka). The private keys in each node are computed by the TA using all the permutations of its multiple master keys and the node's public keys operating over a small prime field, and then stored in a random order in the node. If these are captured, the private keys cannot be used directly, as the adversary would first have to discover the PPMka. The analysis showed that, with suitable keying parameters, even if a sufficient number of private keys is stolen, an adversary with powerful computing resources would need to expend an infeasibly large amount of time and resources to try all the possible PPMka to break the scheme. The adversary may try to discover the PPMka by using pairs of captured nodes to compute their pairwise keys, but this would require the capture of tens of thousands of nodes. Alternatively, even when using the most efficient method, the adversary needs to try a large number of possibilities, equivalent to security strengths of 80 to 192 bits. Overall, the adversary has only a small probabilistic chance of breaking the scheme. These analytical results were verified using computer-simulated attacks and are used to provide guidelines and tables for the selection of the keying parameters to meet implementation and performance requirements, including computation times, memory availability, network sizes, and pairwise key sizes. The proposed key agreement scheme is in effect a non-interactive identity-based scheme which uses the node's identity (ID) as its public key. This allows a node to encrypt messages to a target node once its ID is known.
It can be used by nodes in dynamic, mobile and ad hoc situations to opportunistically send authenticated messages to each other when they are in range. A single message authenticated protocol (SMAP) using the BYka scheme as the cryptographic primitive is proposed. The speed, efficiency, and resilience of the BYka scheme would make it useful as the cryptographic primitive in other applications such as email and voice communications.
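To make the flavour of such a scheme concrete, here is a minimal sketch of the classic Blom construction that BYka builds upon (a symmetric matrix over a small prime field; each node stores a private row derived from its public ID and computes the shared key from the counterpart's ID alone). The field size and security parameter are toy values, not those analysed in the thesis.

```python
import random

Q = 7919                      # small prime field for illustration only
LAMBDA = 4                    # security parameter (toy value)

def vandermonde(node_id):
    """Public column vector g_i = (1, i, i^2, ..., i^lambda) mod Q."""
    return [pow(node_id, k, Q) for k in range(LAMBDA + 1)]

def ta_setup():
    """Trusted Authority: random symmetric (lambda+1)x(lambda+1) matrix over GF(Q)."""
    d = [[0] * (LAMBDA + 1) for _ in range(LAMBDA + 1)]
    for r in range(LAMBDA + 1):
        for c in range(r, LAMBDA + 1):
            d[r][c] = d[c][r] = random.randrange(Q)
    return d

def private_row(d, node_id):
    """Key material stored in the node before deployment: a_i = D * g_i mod Q."""
    g = vandermonde(node_id)
    return [sum(d[r][c] * g[c] for c in range(LAMBDA + 1)) % Q
            for r in range(LAMBDA + 1)]

def pairwise_key(my_private_row, peer_id):
    """Computed in-node from the peer's public ID alone: K_ij = a_i . g_j mod Q."""
    g = vandermonde(peer_id)
    return sum(a * b for a, b in zip(my_private_row, g)) % Q

d = ta_setup()
row_a, row_b = private_row(d, 17), private_row(d, 42)
assert pairwise_key(row_a, 42) == pairwise_key(row_b, 17)   # both sides agree
```

Because the TA's matrix is symmetric, both nodes arrive at the same key without any interaction, which is the non-interactive, identity-based property the abstract refers to.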
- Item: Bluetooth information exchange network (Auckland University of Technology, 2008) Liu, Xiaoning; Al-Anbuky, Adnan
Bluetooth is a low-cost and low-power wireless technology for connecting portable and/or fixed Bluetooth-enabled devices to form short-range wireless ad hoc personal area networks (PANs). As the Bluetooth specification does not specify a protocol to form ad hoc Bluetooth networks, a method for forming an efficient Bluetooth network under a practical networking scenario is still an open research problem. This thesis introduces an approach to implement an indoor ad hoc Bluetooth wireless network, the Bluetooth information exchange network (BIEN). This network formation is based on Bluetooth and Java technologies. A set of Bluetooth-enabled devices configured with the BIEN software application are able to spontaneously establish a dynamic multi-hop wireless network using Bluetooth technology without the need for formal network infrastructure, centralized administration, fixed routers or access points. In this study, the performance evaluation focuses on the relation between network capacity and topology by testing end-to-end performance in terms of throughput and the latency of communication links with various parameters, including the number of hops between nodes and the number of slaves in piconets. The evaluation results show that the throughput reduces with the increased length of a path, and with an increase in the number of slaves in a piconet in the network. The latency also increases with path length, and with the number of slaves in a piconet in the different experimental BIENs, whether or not there is traffic in the networks. Experimental results have further confirmed the necessity to minimize the number of bridge nodes in Bluetooth networks due to their traffic bottleneck effect. This work is an attempt at implementing a distributed multi-hop scatternet with an integrated routing protocol in a practical environment, while most of the literature focuses on modelling it. It intends to demonstrate how Bluetooth technology together with Java technology can be used to design, develop and deploy ad hoc wireless networks with commercial Bluetooth devices, and to examine how well Bluetooth technology supports ad hoc multi-hop wireless network technology.
- Item: A cross-layer design for sensor-based ambient intelligence systems (Auckland University of Technology, 2014) Liu, Yang; Seet, Boon-Chong; Al-Anbuky, Adnan
The wireless sensor network (WSN) is an enabling technology of ambient intelligence (AmI), where an intelligent system can sense the presence of and respond to the context or situations of people in the environment. AmI relies on the massive deployment of interconnected and distributed sensor devices to provide personalised services via intuitive interfaces and natural interactions in a manner consistent with the user contexts. Cross-layer approaches have been widely used for WSN management and play an important role in designing solutions for protocol optimisation. Cross-layer approaches allow the sharing of information across different layers of a protocol stack for significant improvements in network performance and efficiency. From an extensive literature review, it emerges that there exist research opportunities on cross-layer designs for WSNs in context-aware systems. Therefore, the research presented in this thesis develops a cross-layer optimisation approach for WSNs by utilising the user and environment context information from an AmI system. This approach can provide resource-constrained sensor devices with the capability to understand the situations of their surroundings for the purpose of optimising WSN communications.
- Item: Cyber Physical System for Pre-operative Patient Prehabilitation (Auckland University of Technology, 2022) Al-Naime, Khalid Abdulrazak Mahmood; Al-Anbuky, Adnan; Mawston, Grant
Abdominal cancer is one of the most frequent and dangerous cancers in the world, particularly among the elderly, and is considered one of the leading causes of death in New Zealand and throughout the world. Major surgery is associated with a significant deterioration in quality of life, as well as a 20%-40% reduction in postoperative physical function. Physical fitness and level of activity are considered important factors for patients with cancer undergoing major abdominal surgery. These patients are often given exercise programmes prior to surgery (prehabilitation), aimed at improving fitness to reduce perioperative risk. Even though the number of prehabilitation programmes has increased over the last decade, there are many obstacles preventing large numbers of patients from being involved in such programmes. One key problem is access to prehabilitation facilities and resources. The long-distance travel to vital cancer services can have a significant impact on a patient's quality of life and survival. Furthermore, the limited number of healthcare centres and staff affects the number of patients who can participate in supervised prehabilitation programmes. Unsupervised prehabilitation programmes have problems such as uncertainty of compliance with home-based exercises. Also lacking are measurements of the movements that are performed, in relation to the intended frequency and intensity. Patient safety is also an issue with an unsupervised programme. To minimise the above barriers, a model for a mixed mode prehabilitation programme has been designed. An environment for hosting the prehabilitation tracking model has also been developed. The end result proposes an end-to-end solution that provides patients and healthcare staff with a real-time remote monitoring and visualisation system. Furthermore, architectural features were adopted in this work to balance the computational load between the IoT device, gateway and cloud. This has facilitated better usage of the available environment through fewer messages, and the sharing of resources has reflected positively on overall system performance, such as:
a. The system showed high performance, with activity recognition percentages ranging from 70%-94% when using the personalised database.
b. Different logical methods (M1, M2, M3, and M4) for activity recognition were implemented and embedded at the gateway level.
c. Using a mixed mode enabled detecting both casual and formal activities relevant to the prehabilitation programme.
Also, the system offers real-time feedback on patients' progress during the prehabilitation period. On the other hand, many challenging areas require additional research to provide better system performance, such as using artificial intelligence (AI) techniques in various embedded IoT devices and differentiating between the different weights credited to different types of movement and activities. This thesis is divided into seven chapters, each accounting for a specific element of the overall work. The motivational background for the rising demand for healthcare monitoring is presented in the first chapter. The second chapter accounts for a critical review of the existing literature pertaining to the various key elements and boundaries associated with constructing a mixed mode prehabilitation model.
The third chapter provides information related to the tools used for the implementation of hardware and software in the testing and verification of concepts. Chapter 4 proposes a conceptual mixed mode prehabilitation model based on existing rules and health programmes. Chapter 5 examines the various components of the CPS in terms of data collection, data analysis, activity recognition, data visualisation, and short- and long-term data storage. Chapter 6 presents the clearly defined validation output data of the developed mixed mode prehabilitation model. The conclusions of this thesis, as well as the future path of the work, are presented in Chapter 7. Finally, this work has delivered four articles that have been published in international journals and conferences, and two proposed papers are under development to report the research outcomes.
- Item: Development of a Wireless Friendly Fire Prevention System Model (Auckland University of Technology, 2010) Walker, Craig Graham; Al-Anbuky, Adnan
While hunting animals is not considered the most ethical of activities, it is in most regions of the world considered sport, and in some locations still a process of food gathering. The focus of this research is on the accidental shooting of hunters by hunters. The hunting accident is an all too common event experienced throughout the world since mankind first hunted for a meal with a ranged weapon. Now, with the use of wireless technology, an electronic safeguard system can be designed that will aid in preventing hunting accidents. This thesis presents a method of friendly fire prevention and attempts to test this concept as a viable solution to the problem. The concept presented here investigates the use of location-based systems combined with directional data acquisition systems, integrated with a networking ability to pass data between sensor-interfacing clones of itself. The principle of the concept is to use both the sensor and networked data acquired to identify a possibly dangerous shot situation and hence gain the ability to alert the hunter whose aim is causing the dangerous situation. This project comprises the development of a modelling application that is used to generate dangerous shooting situations between simulated hunters and to test the proposed friendly fire prevention method. Furthermore, this project contains the development of a physical prototype with embedded code written to simulate sections of the desired friendly fire prevention method. This prototype system has its chosen sensors individually tested to show the level and quality of data available to the intelligence of the prototype. This intelligence is in turn tested as a complete example of the concept functioning. Results from the model show that even a simple system can provide up to 17% protection coverage from all dangerous shots up to 1000 m. Other specific results indicated that approximately 25% of hunters mistakenly targeted due to vegetation, up to a range of 500 m, could be saved using this friendly fire prevention method.
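As a toy illustration of the geometric core of such a check (combining each hunter's position with the shooter's aim bearing), the sketch below flags a shot as dangerous when another hunter lies within range and within a small angular margin of the aim direction; the coordinates, range and margin are made-up values, not the thesis's model.

```python
import math

# Hypothetical dangerous-shot check: positions in metres on a local grid,
# aim given as a compass-style bearing in degrees (0 = +y axis, clockwise).
MAX_RANGE_M = 1000.0      # assumed maximum dangerous range
AIM_MARGIN_DEG = 2.0      # assumed angular width of the danger cone

def bearing_deg(from_xy, to_xy):
    dx, dy = to_xy[0] - from_xy[0], to_xy[1] - from_xy[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def is_dangerous_shot(shooter_xy, aim_bearing_deg, other_hunters_xy):
    """True if any other hunter lies inside the shooter's danger cone."""
    for hunter_xy in other_hunters_xy:
        dist = math.dist(shooter_xy, hunter_xy)
        if dist > MAX_RANGE_M:
            continue
        offset = abs(bearing_deg(shooter_xy, hunter_xy) - aim_bearing_deg)
        offset = min(offset, 360.0 - offset)      # wrap-around angular difference
        if offset <= AIM_MARGIN_DEG:
            return True
    return False

# Example: another hunter 400 m almost due north of the shooter, aim bearing 1 degree.
print(is_dangerous_shot((0, 0), 1.0, [(5, 400)]))   # -> True
```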
- Item: Distributed incremental data stream mining for wireless sensor network (Auckland University of Technology, 2012) Sabit, Hakilo; Al-Anbuky, Adnan; GholamHusseini, Hamid
Wireless sensor networks (WSNs), despite their energy, bandwidth, storage, and computational power constraints, have embraced dynamic applications. These applications generate a large amount of data continuously, at high speeds and at distributed locations, known as distributed data streams. In these applications, processing data streams on the fly and in distributed locations is necessary mainly for three reasons. Firstly, the large volume of data that these systems generate is beyond the storage capacity of the system. Secondly, transmitting such large continuous data to a central processing location over the air exhausts the energy of the system rapidly and limits its lifetime. Thirdly, these applications implement dynamic models that are triggered immediately in response to events such as changes in the environment or changes in a set of conditions, and hence do not tolerate offline processing. Therefore, it is important to design efficient distributed techniques for WSN data stream mining applications under these inherent constraints. The purpose of this study was to develop a resource-efficient online distributed incremental data stream mining framework for WSNs. The framework must minimize inter-node communications and optimize local computation and energy efficiency without compromising practical application requirements and quality of service (QoS). The objectives were to address the WSN energy constraints, network lifetime, and distributed mining of streaming data. Another objective was to develop a novel high spatiotemporal resolution version of the standard Canadian fire weather index (FWI) system, called the Micro-scale FWI system, based on the framework. The proposed framework integrates an autonomous cluster-based data stream mining technique and a two-tiered hierarchical WSN architecture to suit the distributed nature of WSNs and on-the-fly stream mining requirements. The underlying principle of the framework is to handle the sensor stream mining process in-network, at distributed locations and at multiple hierarchical levels. The approach consists of three distinct processing tasks that asynchronously but cooperatively mine the sensor data streams. These tasks are the sensor node, the cluster head, and the network sink processing tasks. These tasks were formulated around a lightweight autonomous data clustering algorithm called Subtractive Fuzzy C-Means (SUBFCM). The SUBFCM algorithm remains embedded within the individual nodes to analyze the locally generated streams 'on the fly' in cooperation with a group of nodes. The study examined the effects of data stream characteristics such as data stream dimensions and stream periods (data flow rates). Moreover, it evaluated the effects of network architecture parameters such as node density per cluster and tolerated approximation error on the overall performance of the SUBFCM through simulations. Finally, the QoS, or the level of guaranteed performance that is supported by the WSN architecture for applications utilizing the framework, was examined. The results of the study showed that the proposed framework is stream-dimension and data-flow-rate scalable, with average errors of less than 12% and 11% in reference to the benchmarks, respectively.
The node density per cluster and the local model drift threshold showed significant effects on the framework performance only for very fast streams. The study concludes that the network architecture is an important factor in the quality of mining results and should be designed carefully to optimally utilize the basic concepts of the framework. The overall mining quality is directly related to the combined effect of the stream characteristics, the network architecture, and the desired performance measures. The study also concludes that WSNs can provide good QoS, feasible for online distributed incremental data stream mining applications. Simulations of real weather datasets indicate that the Micro-scale FWI can closely approximate the results obtained from the standard FWI system while providing far superior spatial and temporal information. This can offer direct local and global interaction with spaces of a few square metres, as against the tens of square kilometres of the present systems.
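For orientation, the snippet below sketches the standard fuzzy c-means update step (membership and centre recomputation) that subtractive-clustering-seeded variants such as SUBFCM are typically built on; it is a generic illustration with invented data, not the thesis's embedded algorithm.

```python
import numpy as np

def fuzzy_c_means(data, n_clusters=2, m=2.0, iters=20, seed=0):
    """Generic fuzzy c-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # memberships sum to 1 per sample
    for _ in range(iters):
        w = u ** m
        centres = (w.T @ data) / w.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        # Membership update: inversely related to relative distance to each centre.
        ratio = (dist[:, :, None] / dist[:, None, :]) ** (2.0 / (m - 1))
        u = 1.0 / ratio.sum(axis=2)
    return centres, u

# Toy sensor readings (e.g. temperature, humidity pairs) forming two groups.
readings = np.array([[20.1, 40], [19.8, 42], [35.2, 15], [34.9, 14], [21.0, 39]])
centres, memberships = fuzzy_c_means(readings)
print(np.round(centres, 1))
```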
- Item: Distributed Trust-based Routing Decision Making for WSN (Auckland University of Technology, 2019) Khalid, Nor Azimah; Bai, Quan; Al-Anbuky, Adnan
This thesis describes novel approaches to deal with routing decision making in distributed wireless sensor networks (WSNs) and proposes new distributed protocols based on trust. Trust is defined as the level of belief that a sensor node has in another node for a specific action, based on criteria specified according to the application. As WSNs are application specific, the proposed trust-based solutions mainly target two types of network structures, namely the static homogeneous network and the network with a mobile sink. The first contribution of the thesis is a multi-criteria trust model called the Hierarchical Trust-based Model (HTM). The model considers several criteria and evaluates the trustworthiness of a node on two levels. HTM is different from most existing trust models as it evaluates trust for multiple nodes rather than performing a single-node evaluation. The model uses the Analytical Hierarchical Process (AHP) in computing a node's trust. The second contribution is a novel distributed trust-based protocol called the Adaptive Trust-based Routing Protocol (ATRP). The proposed ATRP embeds the proposed HTM in its process. Four network performance metrics (energy, reliability, coverage and reputation) were considered in the forwarder selection. The reputation, which is the accumulated value provided by indirect nodes about an evaluated node's previous communication behaviour, is obtained using Q-learning. ATRP takes into consideration the resource-constrained nature of the nodes by introducing several control mechanisms (timeliness and number of interactions). Thirdly, the thesis considers the implementation of the mobile sink and takes into consideration the relocation issue, which is the main concern in existing distributed mobile sink routing. A new distributed mobile sink routing protocol called the Blockchain-based Routing Protocol (BCRP) is presented, which adapts blockchain elements in its relocation decision strategy. The decision in BCRP is determined by other mobile sinks to ensure that the relocation position is not redundantly covered, because redundant coverage in some applications is unnecessary and will consume more energy. The participating mobile sinks are able to make decisions without the central entity's help, based on a set of rules that are pre-agreed by all mobile sinks. The relocation will only happen if it is agreed (verified) by a certain number of mobile sinks. In such situations, the decision making will benefit a larger number of nodes and all nodes are able to get updated information. The performance of BCRP is evaluated and compared under several simulation environments in terms of five performance metrics, i.e., energy consumption, packet delivery ratio, average delay, throughput and coverage level. Based on the simulation results, the proposed approaches outperform the other comparable protocols for all the performance metrics.
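As a minimal sketch of how a Q-learning style reputation value can be accumulated from direct and indirect evidence (the general mechanism the abstract names, with all parameter values and names assumed here rather than taken from the thesis):

```python
# Hypothetical Q-learning style reputation update for a neighbouring node.
ALPHA = 0.3   # learning rate (assumed)
GAMMA = 0.5   # discount on the neighbour's own best estimate (assumed)

def update_reputation(q, neighbour, reward, neighbour_best_estimate=0.0):
    """Blend the old reputation with newly observed forwarding behaviour."""
    old = q.get(neighbour, 0.0)
    q[neighbour] = (1 - ALPHA) * old + ALPHA * (reward + GAMMA * neighbour_best_estimate)
    return q[neighbour]

reputation = {}
# Successful forwarding observed (reward 1), then a dropped packet (reward 0).
update_reputation(reputation, "node_7", reward=1.0)
update_reputation(reputation, "node_7", reward=0.0)
print(reputation["node_7"])
```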
- Item: Energy aware survivable routing approaches for next generation networks design (Auckland University of Technology, 2013) Luo, Bing; Liu, William; Al-Anbuky, Adnan
Currently, with the booming development of Next Generation Networks (NGNs), there is an urgent need to reduce energy consumption in telecommunication networks due to its environmental impact and potential economic benefits. However, most existing green networking approaches give little or no consideration to the network survivability aspect. This thesis aims to tackle the trade-off problem between energy efficiency and network survivability. In this thesis, we optimize this trade-off by using energy aware survivable routing approaches. This sort of trade-off problem falls in the class of capacitated multi-commodity minimum cost flow (CMCF) problems, i.e., problems in which multiple commodities have to be routed over a graph subject to constraints. Generally speaking, this problem is also categorized as combinatorial optimization, which can be precisely modelled using Integer Linear Programming (ILP) formulations. ILP is a mathematical method for determining the best feasible solution to achieve an optimal objective, such as maximum profit or lowest cost, given mathematical models of the requirements and constraints represented as linear relationships. Using ILP formulations, we propose three energy aware survivable routing models: Energy Aware Backup Protection 1+1 (EABP 1+1), Energy Aware Backup Protection 1:1 (EABP 1:1), and Energy Aware Shared Backup Protection (EASBP). From the energy saving aspect, we integrate several energy efficient approaches into them, such as energy aware routing, sleeping mode, and energy consumption rating strategies. For the network survivability concern, EABP 1+1, EABP 1:1, and EASBP are embedded with 1+1 backup protection, 1:1 backup protection, and shared backup protection respectively. Moreover, for performance comparison, the three models have been implemented in IBM ILOG CPLEX Optimization Studio and solved by the CPLEX 11.1 solver. Furthermore, since the CPLEX Optimization Studio can only produce theoretical results, we have developed and integrated the three energy aware survivable routing models into the TOTEM (TOolbox for Traffic Engineering Methods) network simulator for better visualization. We have conducted extensive case studies to validate these three models. The most energy efficient model, EABP 1:1, was found to save up to 90% of energy consumption compared with the worst-case multi-commodity flow (MCF) algorithm, due to the combined use of energy aware routing, sleeping mode strategies and energy consumption rating. In addition, the sleeping mode is an effective approach to reduce energy cost, and EABP 1:1 can save up to half the energy usage of EABP 1+1 by introducing sleeping mode. However, these two models consume a significant amount of capacity for network survivability purposes. Therefore EASBP has been proposed, and the numerical results have confirmed that it is the best solution to tackle the trade-off between energy reduction and network survivability. This model consumes significantly less capacity with a small sacrifice in energy expenditure, especially under the condition of large traffic demands flowing in the network.
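For readers unfamiliar with this class of models, a generic (simplified, and not the thesis's exact) ILP for energy-aware capacitated multi-commodity flow can be written as: minimise link power subject to flow conservation and capacity, with binary link-activation variables enabling sleeping mode.

```latex
\begin{align}
\min \quad & \sum_{(i,j)\in E} P_{ij}\, y_{ij} \\
\text{s.t.} \quad
& \sum_{j:(i,j)\in E} x_{ij}^{d} - \sum_{j:(j,i)\in E} x_{ji}^{d} =
\begin{cases} t_{d} & i = s_{d} \\ -t_{d} & i = r_{d} \\ 0 & \text{otherwise} \end{cases}
&& \forall i \in V,\ \forall d \in D \\
& \sum_{d\in D} x_{ij}^{d} \le C_{ij}\, y_{ij} && \forall (i,j)\in E \\
& x_{ij}^{d} \ge 0, \quad y_{ij}\in\{0,1\}
\end{align}
```

Here y_{ij} indicates whether link (i,j) is powered on (sleeping when 0), P_{ij} is its power cost, C_{ij} its capacity, and x_{ij}^d the flow of demand d with volume t_d from source s_d to destination r_d; survivability constraints (1+1, 1:1 or shared backup) would be layered on top of this core.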
- Item: Energy efficient opportunistic connectivity for wireless sensor network (Auckland University of Technology, 2013) Sivaramakrishnan, Sivakumar; Al-Anbuky, Adnan
This thesis provides a theoretical analysis of the effects of mobility, node density and a limited transmission range on the connectivity of a varying density of nodes in wireless sensor networks. Connectivity in cellular networks has the advantage of a fixed centralised infrastructure that can provide wide communication coverage. Wireless sensor networks, on the other hand, have a limited range. This limited range, coupled with node mobility, often results in network holes. As the architecture is decentralised, there is no central node that monitors nodes joining or leaving the network. The challenge of identifying these nodes, due to their dynamic nature of movement, is presented here. Opportunistic connectivity addresses the challenge of providing connectivity to isolated mobile nodes. This is achieved through the discovery of regions where a good density of network nodes is available. The concept involves four key components: adaptive sampling, coverage, handoff and directional communication. These act to minimise the energy cost incurred in discovering related nodes and establishing connectivity in the network. The window of time for communication is extended in an energy-efficient manner through coverage, handoff and direction for such delay-tolerant networks. The overall contribution of this thesis is a protocol design for opportunistic connectivity, its implementation and analysis, with reference to the conservation of energy and reduction of packet drops, in conjunction with protocol testing on an application scenario. The thesis is structured into seven chapters. The first two chapters provide the background and the literature analysis. The third chapter deals with the systems and tools used for the modelling and testing. It gives an insight into the different available tools and their ability to validate the parameters of our concept of an opportunistic connectivity protocol. Subsequently, the thesis discusses the design of the 'adaptive Energy COnscious DElay Tolerant OpportUnistic Routing' (ECO-DETOUR) protocol for such delay-tolerant networks in chapter four, as a four-stage process involving adaptive sampling, coverage, direction and hand-off. Design of the protocol is followed by implementation in chapter five, which was performed using the OPNET and MATLAB environments. The chapter details the different conditions in which each of the four parameters is triggered and discusses the implementation of each of the four parameters as pseudo-code. Finally, in chapter six, the protocol is tested on a wildlife application scenario. The effectiveness of the protocol is measured in relation to the energy saved and the reduction in the number of packet drops achieved under different mobility conditions. Results show that ECO-DETOUR achieves a 45%-60% reduction in the energy expended to set up communication and exchange data packets. The bulk of the energy saving by the ECO-DETOUR protocol comes from adaptive sampling, followed by coverage, handoff and direction.
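As an illustrative (entirely invented) sketch of the adaptive-sampling idea mentioned above, the wake-up interval below is shortened when neighbour discovery keeps succeeding and lengthened when the node appears isolated, trading discovery latency against energy; the bounds and back-off factors are assumptions, not the ECO-DETOUR design.

```python
# Hypothetical adaptive sampling of the radio duty cycle for neighbour discovery.
MIN_INTERVAL_S, MAX_INTERVAL_S = 5, 300    # assumed bounds on the wake-up interval

def next_interval(current_s, neighbours_found):
    """Wake more often when connectivity opportunities exist, less when isolated."""
    if neighbours_found > 0:
        return max(MIN_INTERVAL_S, current_s / 2)   # densify sampling near other nodes
    return min(MAX_INTERVAL_S, current_s * 2)       # back off to save energy in holes

interval = 60
for found in [0, 0, 3, 1, 0]:          # simulated discovery outcomes per wake-up
    interval = next_interval(interval, found)
    print(interval)
```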
- Item: IoT-Based Sensor Networks: Architectural Organization, Virtualization and Network Re-orchestration (Auckland University of Technology, 2021) Acharyya, Indrajit; Al-Anbuky, Adnan; Sivakumar, Sivaramakrishnan
The Internet of 'Things' (IoT), an extension of localised 'Wireless' Sensor Networks (WSN), has been employed to realize a multitude of smart, intelligent and pervasive Cyber Physical System (CPS) infrastructures. CPS encompasses a host of technological and architectural challenges such as low-power communication, protocol conversions, data transport and the ability to interoperate with other IoT technologies. This makes CPS significantly complex and reduces its flexibility to adapt. A typical IoT-based sensor network may, to a certain extent, lack key softwarization-enabled operational drivers, which may introduce significant constraints on its ability to flexibly engage with its external surroundings. Flexible re-orchestration of such complex IoT-based sensor networks, however, is vital towards aligning system 'dynamics' with those of a monitored 'physical' phenomenon while operating in dynamic physical environments (for example, WSNs deployed for outdoor applications such as forest fire monitoring). Arguably, considerable levels of operational flexibility can be achieved through a cloud-based architectural framework that hosts the required operating tools to allow for software-defined network virtualization enabling suitable re-orchestrations of the related physical network. In a nutshell, the research work documented within this thesis endeavours to render a sensor network capable of undergoing the desired flexible re-orchestrations by converging upon a novel architectural proposition inclusive of modularization, cloud-based virtualization, software control via 'command-driven reconfigurability', and maintenance of a library of additional firmware modules (for each of the nodes at the physical level), among others. Other equally noteworthy and innovative contributions of this thesis pertain to the outlining of a logical strategy for the sensor network 're-orchestration' process (spanning the three phases of 'Data Analysis and Event-Identification', 'Re-orchestration-Planning' and 'Re-orchestration-Execution') as well as both determining and formulating a generic model for the latency associated with the same. The approach adopted herein is to allow the underlying physical layer to undergo desired node- and network-level (including topological) re-orchestrations (based on the outcomes derived from the cloud) in a flexible and expeditious manner during run-time through a 'command-driven' re-configurability approach. This relatively simple yet expedient approach involves loading a 'unified firmware' (i.e., one encompassing the requisite, 'well-defined' software modules) onto nodes (assumed to be capable of accommodating and executing the corresponding functional roles owing to the enhanced capabilities ushered in by the advancements attained in the field of SoC and Embedded Systems technologies) to allow for conditional execution of the same remotely by means of 'commands'. In order to augment the flexibilities that could be offered by the node over time based on the service requirements, a library of 'reusable firmware modules' (into which the requisite new functional modules could be integrated from time to time) could be maintained so as to be readily accessible by the main firmware.
In regard to the above context, it is worth reiterating that the thesis underscores the key prerequisites for the above prior to laying out the concept in chapter four. Firstly, this includes identifying and clearly defining the core functional components (constituting any IoT-based sensor network organization, viz. 'leaf', 'router' and 'gateway' functionalities) as 'modules'. The second prerequisite pertains to the modularization of the core functional components that have been identified and defined. Virtualization of the core functional modules so identified, and thereby of the entire network (essentially, cloud-level Network Virtualization, i.e., 'NV'), that 'logically' (i.e., from a software standpoint) mimics the operational dynamics of the underlying physical network functions forms the third prerequisite. As alluded to earlier, the fourth prerequisite refers to the library of reusable 'firmware modules' at the node level (for augmented flexibility). The thesis is sectioned into seven chapters, each accounting for a specific element of the overall work. The first chapter provides an overview of the various technological domains and aspects associated with this research work, whilst laying out the necessary background, vision and motivation behind the same. The second chapter accounts for a review of the existing literature pertaining to the various elements associated with this research, viz. WSN virtualization, softwarization, re-orchestration and associated network downtime (as well as other architectural frameworks designed with relatively similar motives in mind). Information pertaining to the tools employed for virtualization and hardware implementation purposes is provided in the third chapter. As elaborated above, Chapter 4 firstly spells out the key prerequisites for the proposed architecture prior to describing the same, along with its internal components. It then outlines the strategy adopted for the re-orchestration process, including formulation of a generic model for the latency that the network may experience as a result of the same. By means of certain pertinent example cases of software-defined sensor network re-orchestrations, chapter 5 details the specifics of both virtual and physical implementations, conducted via the Contiki-oriented virtual platform of the Cooja simulator and the Contiki-ported Texas Instruments CC2538 wireless transceivers respectively. It also brings to the fore the practicability of employing Contiki as a software development tool that allows for precise replication, at the virtual level, of the code employed on physical motes, whilst leveraging the same to better analyse and conduct more accurate performance evaluations pertaining to the re-orchestration process. As a means of demonstrating the workability of the proposed concept with respect to a real-life scenario, chapter 6 deals with a use case pertaining to forest fire monitoring, wherein dynamic re-orchestration of the sensor network so deployed could significantly aid (pre-emptive) re-routing of network dataflow and/or maintenance of network connectivity in the event of network fragmentation emanating from rapidly spreading uncontained fire outbreaks. Chapter 7 puts forth the conclusions of this thesis, along with the future course of work to be undertaken.
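To give a flavour of the 'command-driven reconfigurability' idea (a unified firmware whose well-defined modules are enabled or disabled remotely by commands), here is a deliberately simplified, purely illustrative sketch; the module names, command format and registry are assumptions, not the thesis's implementation.

```python
# Hypothetical node-side dispatcher: one unified firmware exposing functional
# modules ('leaf', 'router', 'gateway') that are switched on/off by remote commands.
class NodeFirmware:
    def __init__(self):
        self.active_roles = {"leaf"}                     # default functional role
        self.module_library = {                          # reusable firmware modules
            "leaf": lambda: "sampling sensors",
            "router": lambda: "forwarding neighbour traffic",
            "gateway": lambda: "bridging to the cloud",
        }

    def handle_command(self, command):
        """Commands such as 'enable:router' or 'disable:leaf' re-orchestrate the node."""
        action, _, role = command.partition(":")
        if role not in self.module_library:
            return f"unknown module '{role}'"
        if action == "enable":
            self.active_roles.add(role)
        elif action == "disable":
            self.active_roles.discard(role)
        return sorted(self.active_roles)

    def run_cycle(self):
        return [self.module_library[r]() for r in sorted(self.active_roles)]

node = NodeFirmware()
print(node.handle_command("enable:router"))   # node now acts as leaf + router
print(node.run_cycle())
```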
- Item: Microwave sensing for non-destructive evaluation of anisotropic materials with application in wood industry (Auckland University of Technology, 2012) Bogosanovic, Mirjana; Al-Anbuky, Adnan; Emms, Grant
Microwave non-destructive testing of wood is an active research field but, despite remarkable advances reported in the literature to date, wood testing devices are not widely implemented in industry. This thesis aims to progress the knowledge of wood testing by investigating two of the key issues: microwave propagation through dried wood, and sensor design. Two microwave antennas with focused beams are designed and implemented. The first antenna is a commonly used horn with a dielectric lens, offering a broadband solution operating over the 8 to 12.4 GHz frequency band. The second solution is a novel metal plate lens antenna with beam forming in the near-field zone. Successful beam forming and focusing are achieved, but a narrowband characteristic prevented application of this sensor to the microwave wood testing considered in this thesis. A microwave system for free-space measurement is, in its various forms, applied to the measurement of wood properties, considering wood as an anisotropic, heterogeneous and multiphase dielectric. Microwave free-space transmission measurement methods are considered, analysing error sources and available mitigation techniques. A focused-beam transmission measurement setup with free-space calibration has been identified as an optimum solution for microwave wood testing. The properties of this measurement system are analysed, having in mind its application to wood measurement in an industrial environment. The samples for the study are carefully chosen to cover a range of features frequently met in practice. The 'actual' sample properties, against which the performance of the microwave measurements is judged, are determined using visual inspection and CT scans. The theoretical background on electromagnetic wave propagation through anisotropic media is considered. Of particular interest is the depolarisation of a linearly polarised plane wave in anisotropic media, which is also demonstrated experimentally. A simple case of grain inclination in a plane is considered first, demonstrating experimentally that grain inclination directly relates to the level of depolarisation. This is then applied to a general case, in which the grain is inclined in three-dimensional space. It is shown that the technique has a good correlation with visually inspected grain angle values, but additional sensor calibration is recommended. Heterogeneity of the sample is analysed using the same set of sensors, but in a different arrangement. The aim was to detect variations in wood structure and investigate a method for automated categorisation of wood samples based on the type of defect. The categorisation of samples is considered as a way to combat the great variability in sample properties and allow easier and more accurate empirical modelling. The microwave transmission measurement data are compared with CT scans and visual inspection of the samples. Good results are achieved, not only for samples with distinctive defects such as knots, but also for samples with needle flecks, resin pockets and changes in annual ring arrangement along the axial direction. The heterogeneity study is then extended to include an analysis of the effects which gradual variations in wood structure have on the measured microwave signal.
The obtained results show that the phase of the microwave transmission coefficient can be used as a good indicator of slow variation in sample density. The study also includes an analysis of free-space calibration and broadband transmission measurement, investigating their positive sides, such as improved accuracy, as well as their negative sides, such as the complexity which these procedures introduce into an industrial process. Techniques for combating residual error are investigated, offering frequency averaging as an easily implemented option. The importance of working over a frequency bandwidth is demonstrated, both for dealing with phase periodicity and for combating measurement uncertainty. Response calibration is considered as an affordable option which can remove some of the systematic errors, yet is less disruptive for the industrial process. Furthermore, both moisture content and density distribution are considered, as well as bulk properties averaged over the whole sample volume. It has been demonstrated that both the moisture and the density of wood contribute to changes in the microwave transmission coefficient. Measured data reveal a polarisation dependence of the moisture-related transmission magnitude, which may be used as additional information in an attempt to distinguish between the contributors. This was further investigated on a set of samples observed at several moisture content values. The correlation between bulk density and microwave-measured density improves when samples with knots are omitted, demonstrating the advantage of sample categorisation. In the final section of the thesis, a scattering experiment is performed, measuring the transmission through the wood when the transmitting and receiving antenna axes are at right angles. This experiment shows that maximum transmission in this direction correlates best with the arrangement of annual rings in the sample, indicating the possible existence of guided modes in the layered media. This finding is significant as it demonstrates the complexity of the microwave propagation model for a sample with such a complex structure.
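For context (a textbook relation rather than the thesis's calibrated model), when a linearly polarised wave passes through a uniaxial anisotropic slab whose grain makes an angle θ with the incident polarisation, the co- and cross-polarised transmitted fields can be written as:

```latex
\begin{align}
E_{\mathrm{co}}    &= \left(T_{\parallel}\cos^{2}\theta + T_{\perp}\sin^{2}\theta\right)E_{0},\\
E_{\mathrm{cross}} &= \left(T_{\parallel} - T_{\perp}\right)\sin\theta\cos\theta\, E_{0},
\end{align}
```

where T_∥ and T_⊥ are the complex transmission coefficients along and across the grain; the cross-polarised term vanishes when the grain is aligned with the polarisation, which is why measured depolarisation tracks the grain angle.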
- Item: Mobile phone based remote monitoring system (Auckland University of Technology, 2008) Liu, Danyi; Al-Anbuky, Adnan
This thesis investigates embedded databases and graphical interfaces for the MicroBaseJ project. The project aim is the development of an integrated database and GUI user interface for a typical 3G, or 2.5G, mobile phone with Java MIDP2 capability. This includes methods for data acquisition, mobile data and information communication, data management, and a remote user interface. Support of phone-delivered informatics will require integrated server and networking infrastructure research and development; a key research and development (R&D) challenge is to support effective and timely delivery of data for incorporation in mobile device-based informatics applications. Another important aspect of the project is determining how to develop efficient graphics for the small mobile screen. The research investigates and analyses the architecture of a mobile monitoring system. The project developed a generic solution that can be implemented in a number of commercial sectors, such as horticulture, building management and pollution/water management. The developed concept is tested using data relevant to the horticultural area of application. The system also addresses the main issues related to mobile monitoring, including real-time response, data integrity, solution cost, graphical presentation, and the persistent storage capabilities of modern mobile devices. Four embedded databases based on J2ME have been investigated. Two of the four have been evaluated and analysed. The Insert, Sequence Search, and Random Search functions of the Perst Lite and RMS (Record Management System) databases have been tested. The size of the processed data was limited to 20,000 records when using the wireless toolkit simulator, and 11,000 records when using a mobile phone. Perst Lite showed good performance and outperformed RMS in all tests. User interface software such as J2ME Polish for mobile phones has been investigated. A custom J2ME class for the graphical interface is developed. This provides the graphical presentation of the data collected from the sensors, including temperature, wind speed, wind direction, moisture, and leaf wetness. The graphical interface, bar charts, and line charts with a trace ball for the collected data have been designed and implemented. The embedded database performance and project performance have been investigated and analysed. The performance of Perst Lite and RMS is evaluated in terms of the insert, sequence search, and random search functions based on simulation and real devices, with record numbers varying from 1,000 to 20,000. The project performance covers data receiving and storage, and data presentation and configuration. The performance of data storage and configuration can be neglected due to the running mode and the response time. Thus, data presentation performance is the key focus in this project. This performance was divided into the categories of initialisation, data search, data selection, and charting. The initialisation performance includes the initialisation of the project parameters and the reaching of the welcome interface. Data search performance refers to the retrieval of the specified data from the embedded database, measured on the 48 data points that can be presented on the mobile screen from the retrieved data.
These four performance types are measured for record counts varying from 1,000 to 18,000, with the retrieved data range varying from 1 day to 30 days.
- Item: Motors fault recognition using distributed current signature analysis (Auckland University of Technology, 2012) Gheitasi, Alireza; Al-Anbuky, Adnan; Tek, Tjing Lie
Immediate detection and diagnosis of existing faults and faulty behaviour of electrical motors using electrical signals is one of the important interests of the power industry. Motor current signature analysis is a modern approach to diagnosing faults of induction motors. This thesis investigates the significance of fault signatures propagated through distributed power systems, aiming to explain and quantify different observations of fault signals and hence diagnose machine faults with higher accuracy. Electrical indicators of faults, unlike other fault indicators (e.g. vibration signals), propagate all over the network. Therefore fault signals may be manipulated by the operation of neighbouring motors and the system's environmental noise. Both simulation and practical results clearly demonstrate the signal interference, and hence confusion in diagnosis, due to the presence of a faulty motor nearby. Thus a knowledge-based system is necessary to understand the meaning of the signals manifested at various parts of the distributed power system. On the other hand, taking into account that fault signals travel all over the network, several observations can be made of events in the network. In this thesis the idea of cross-evaluation of fault signals considering signal propagation is discussed and analysed. The research attempts to improve diagnosis reliability with a simple and viable framework of decision making. The thesis scope is limited to monitoring the behaviour of induction motors in distributed power systems; these types of electrical motors are the main load in most industries. In this thesis, existing formulations of fault signatures are not significantly disturbed, as distributed diagnosis can fit into an existing framework of current signature analysis. The research takes advantage of multiple areas of study to formulate the propagation of fault signals while they travel in a scaled-down distributed power system. At the beginning, a systematic approach has been employed to estimate the influence of fault signals on the currents of neighbouring electrical motors. Further analysis of the attenuation of electrical signals leads to a technical framework that evaluates the propagation of fault signals in power networks. The framework has been developed to estimate the origin of a fault signal by employing propagation patterns and estimating anticipated fault representatives around the network. An analytical process has been proposed to take advantage of multiple observations in order to diagnose the type and identify the origin of fault signals. This can help maximize the number of independent observations and thus improve the accuracy of traditional approaches to current signature analysis. In general, this provides better monitoring of the behaviour of electrical motors at a given site. A rewarding system has been used to identify and track the signals caused by motors and quantify the association of current signals with known industrial faults. An example of a scaled-down distributed power system has been simulated to describe the behaviour of distributed power systems with faulty components. The simulation model is carefully compared with the practical results to validate the simulation results thoroughly.
The type and strength of faults and the size, speed, load and placement of electrical motors are the acting variables in the propagation patterns of fault signals. These variables have been simulated in a scaled-down industrial power network to examine distributed diagnosis in the new environment. In addition, a number of scaled-down experiments have been employed to verify the simulation models and confirm the accuracy of the results. Analytical results demonstrate a significant improvement in describing interference amongst electrical motors that work together in an electrical network. This leads to a simple strategy for identifying the ownership of fault signals and hence obtaining more accurate diagnostic results. Further developments in modelling the propagation of fault indicators emerged for improving the reliability and efficiency of fault diagnosis in industrial systems. On the other hand, a number of shortcomings have been observed in implementing the strategy of distributed diagnosis, including confusion among many similar faults in the power network and malfunctioning of the diagnosis system due to non-linear interference from noise signals. Some of these problems are believed to be solvable by using a proper numerical solution (e.g. Artificial Neural Network, Bayesian, etc.) to process fault indices and propagation patterns before and after the occurrence of each fault. In conclusion, the thesis does not claim to provide a complete solution for fault diagnosis in electrical motors, but it is an attempt to provide a more dependable industry solution for fault diagnosis in induction motors. Distributed diagnosis is a framework which takes advantage of multiple observations of a single fault and hence is dependent on the quality of the acquired signals among individual observations.
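For orientation, a commonly used relation in motor current signature analysis (given here as general background, not as the thesis's specific formulation) places broken-rotor-bar fault signatures at sideband frequencies around the supply frequency:

```latex
f_{\mathrm{sb}} = (1 \pm 2ks)\, f_{s}, \qquad k = 1, 2, 3, \ldots
```

where f_s is the supply frequency and s the per-unit slip of the induction motor; the amplitude of such sidebands relative to the supply component is the kind of indicator a knowledge-based distributed diagnosis system would track as it propagates, and attenuates, through the network.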
- Item: Network trustworthiness evaluation in P2P networks (Auckland University of Technology, 2018) Xiang, Ming; Liu, William; Bai, Quan; Al-Anbuky, Adnan
Trust and reputation management has emerged as a significant research trend in terms of soft security for tackling security issues in computer networks. It is different from traditional security mechanisms such as cryptography, which are described as hard security. The basic idea is that every entity in the network, as an individual, can rate the others based on previous experiences. This trust rating can assist other machines in deciding whether to collaborate with that machine in the future. Recently there has been a rapid increase in the literature on trust and reputation management, which mainly focuses on algorithmically modelling and evaluating trust to effectively detect and avoid various malicious attacks. These trust algorithms can isolate the malicious entities from the local trust aspect. The concept of trust in computer networks is derived from sociology, where it is defined as the belief that trustees will meet a positive expectation of intention and behaviour. Moreover, a trustee at different positions will behave differently, for example at a Structural Hole or at a position surrounded by Simmelian Ties. Do these position-based phenomena also exist in computer networks? In other words, in computer networks, does the location of a node affect its behaviour, especially in the emerging peer-to-peer (P2P) network architecture? Motivated by the above research questions, in this thesis we have focused on studying how the underlying network topological connectivity can affect the overlay trust behaviours from the global network perspective. This thesis has four main contributions. Firstly, we have revealed the impact of the underlying topology on overlay trust behaviours in P2P networks. We have confirmed the correlations between the topological structures of Simmelian Ties and Structural Holes and the node trustworthiness behaviours. Secondly, we have defined a new term, network trustworthiness, to describe the trust level of a network topology. This is followed by introducing the Network Trustworthiness as a Service (NTaaS) concept, which can be adapted to accommodate the different levels of trust service demands from users. Thirdly, we have proposed the T value and Trustworthiness Tolerance Margin (MTT) based evaluation framework to evaluate the trustworthiness of network topologies from the global aspect. Lastly, we have proposed a mathematical approach to optimise the network topology by adding a link in the most critical position so that the underlying network structures can best resist various unwanted behaviours and network failures.
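As a rough, generic illustration of the topological notions involved (not the thesis's own metrics), Burt's constraint can flag structural-hole spanners, and the number of shared neighbours across an edge approximates how "Simmelian" a tie is; the sketch below assumes the networkx library and a toy graph.

```python
import networkx as nx

# Toy overlay topology: two triangles bridged by a single edge.
g = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"),    # tightly knit group (Simmelian ties)
              ("c", "d"),                            # bridge across a structural hole
              ("d", "e"), ("e", "f"), ("d", "f")])

# Burt's constraint: low values indicate nodes sitting on structural holes.
constraint = nx.constraint(g)
print(min(constraint, key=constraint.get))           # -> 'c' or 'd', the bridging nodes

# Simmelian strength of a tie, approximated by shared neighbours on the edge.
def common_neighbour_count(graph, u, v):
    return len(list(nx.common_neighbors(graph, u, v)))

print(common_neighbour_count(g, "a", "b"))           # 1: embedded in a triangle
print(common_neighbour_count(g, "c", "d"))           # 0: the bridging, non-Simmelian tie
```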
- Item: Object-centric Intelligence: Sensor Network and Thermal Mapping (Auckland University of Technology, 2013) Yamani, Naresh; Al-Anbuky, Adnan; Daly, Clyde
Quality of product is an important aspect in many commercial organizations where storage and shipment practices are required. Temperature is one of the main parameters that influence quality, and temperature treatments of agricultural products therefore require special attention. The temperature variation in a meat chiller has a significant effect on the tenderness, color and microbial status of the meat; therefore, thermal mapping during the chilling process and during chilled shipment to overseas markets is vital. The literature indicates that deviations of only a few degrees can lead to significant product deterioration. There are several existing methods for thermal mapping: these include Computational Fluid Dynamics (CFD) and Finite Element Methods (FEM) for examination of the environmental variables in the chiller. These methodologies can work effectively in non-real-time. However, these methods are quite complex and incur a high computational overhead when it comes to hard real-time analysis within the context of the process dynamics. The focus of this research work is to develop a method and system for object-centric environment monitoring using the collaborative efforts of both wireless sensor networks and artificial neural networks for spatial thermal mapping. Thermal tracking of an object placed anywhere within a predefined space is one of the main objectives here. Sensing data is gathered from restricted sensing points and used for training the neural network on the spatial distribution of the temperature at a given time. The solution is based on the development of a generic module that could be used as a basic building block for larger spaces. The Artificial Neural Networks (ANNs) perform dynamic learning using the data they collect from the various sensing points within the specific subspace module. The ANN could then be used to facilitate mapping of any other point in the related sub-space. The distribution of the sensors (node placement strategy for better coverage) is used as a parameter for evaluating the ability to predict the temperature at any point within the space. This research work exploits the neuro Wireless Sensor Network (nWSN) architecture in steady-state and transient environments. A conceptual model has been designed and built in a simulation environment, and experiments have also been conducted using a test-bed. Shepard's algorithm with a modified Euclidean distance is used for comparison with the adaptive neural network solution. An algorithm is developed to divide the overall space into subspaces covered by clusters of neighbouring sensing nodes to identify the thermal profiles. Using this approach, a buffering and Query-based nWSN Data Processing (QnDP) algorithm is proposed to fulfil data synchronization. A case study on a meat plant's cool storage has been undertaken to demonstrate the best layout and location identification of the sensing nodes that can be attached to the carcasses to record thermal behavior. This research work assessed the viability of using the nWSN architecture. It found that the Mean Absolute Error (MAE) at the infrastructural nodes has a variation of less than 0.5 °C. This level of MAE suggests that the nWSN is capable of generating predictions for similar applications.
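As a generic sketch of the comparison baseline mentioned above, Shepard's method (inverse distance weighting) predicts the temperature at an arbitrary point from the readings at fixed sensing points; the power parameter and readings here are illustrative only, and the thesis's modified Euclidean distance is not reproduced.

```python
import math

def shepard_interpolate(query_xyz, sensor_points, power=2.0):
    """Inverse-distance-weighted temperature estimate at query_xyz.

    sensor_points: list of ((x, y, z), temperature) tuples from the fixed nodes.
    """
    num, den = 0.0, 0.0
    for position, temperature in sensor_points:
        d = math.dist(query_xyz, position)
        if d == 0:                      # query coincides with a sensing point
            return temperature
        w = 1.0 / d ** power
        num += w * temperature
        den += w
    return num / den

# Toy chiller readings at four corners of a 2 m cube (temperatures in deg C).
sensors = [((0, 0, 0), 1.2), ((2, 0, 0), 1.8), ((0, 2, 0), 0.9), ((0, 0, 2), 1.5)]
print(round(shepard_interpolate((1, 1, 1), sensors), 2))
```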
- ItempH Wireless Sensor Network for the meat tenderizing process(Auckland University of Technology, 2010) Devan, Vashu; Al-Anbuky, AdnanThe wireless sensor network is a paradigm shift from the conventional wired system and has made remarkable progress over the last ten years. The technology is cost-effective, efficient and user-friendly, as there is no need for external cables to interconnect devices. There are significant opportunities to assess existing wired systems and, with thorough feasibility studies, most of these could readily be converted into wireless systems. A conceptual pH Wireless Sensor Network based on a decentralized architectural paradigm is proposed in this thesis to introduce wireless connectivity and enhance the system characteristics of a wired meat tenderizing system. The network consists of pH Sensor Nodes and Stimuli Actuator Nodes; the focus of this thesis is the architectural design of these nodes and the development of prototypes. Carcass pH is determined non-intrusively using a proprietary pH analysis algorithm and process, which enables pH analysis of carcasses in a meat plant without stopping the conveyor. The basis of the design is distributed processing and the collaborative nature of a wireless sensor network, and the work showed that a network of sensor/actuator nodes could replace the existing wired meat tenderizing system and effectively handle the meat tenderizing process. The key benefit anticipated from the proposed wireless network node architecture is an intelligent, re-configurable system that is compact, modular, cheaper and easier to install. The need for precise and consistent results creates an opportunity for further improvements to signature sensing (the spectrum of the carcass response to stimuli) and to the signature analysis algorithm. There is also scope for adding intelligence to the actuator nodes to aid in developing a fault-tolerant system with a failsafe mode. While this project is a miniaturised version of real-time process control, future studies could target the complete replacement of wired industrial process control with wireless sensor networks. The objectives of the project were met following the setup of the ZigBig network to simulate meat tenderizing process control and the design of the sensor node and actuator node architectures. A set of standard tools, readily available on the market, was also determined as part of the project. The major achievement of the project was the development of sensor node and actuator node prototypes, consistent with the expectations of the sponsors, which were handed over to Merit of Measurement, Auckland.
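To make the sensor/actuator split described above more concrete, here is a purely hypothetical sketch of one decision a Stimuli Actuator Node might take locally on receipt of a pH Sensor Node reading. The message format, target pH value and decision rule are invented for illustration; the thesis's pH analysis algorithm is proprietary and is not reproduced or approximated here.

```python
# Hypothetical sensor/actuator collaboration sketch (not the thesis's method).
from dataclasses import dataclass

@dataclass
class PhReading:
    carcass_id: int
    ph: float
    minutes_post_slaughter: int

def actuator_decision(reading: PhReading, target_ph_at_time: float) -> bool:
    """Apply electrical stimulation if pH decline lags an assumed target curve."""
    return reading.ph > target_ph_at_time   # lagging decline -> stimulate

# Example: at 60 minutes the (assumed) target pH is 6.2.
reading = PhReading(carcass_id=17, ph=6.5, minutes_post_slaughter=60)
print("stimulate" if actuator_decision(reading, target_ph_at_time=6.2) else "hold")
```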
- ItemPost-Operative Hip Fracture Rehabilitation Activity Movement Monitoring(Auckland University of Technology, 2022) Gupta, Akash; Al-Anbuky, Adnan; McNair, PeterHip fracture is a life-threatening event whose incidence increases with age and which is common among the older population. It causes significant problems, as the injured person faces an increased risk of mortality, restricted movement and well-being, loss of independence and other adverse health-related outcomes. Following surgery, physiotherapy is essential for strengthening muscles, mobilising joints and fostering the return to regular physical activity. Ideally, appropriate rehabilitation, with a set programme performed in predefined supervised and unsupervised environments, can play a significant role in recovering the person's physical mobility, boosting their quality of life, reducing adverse clinical outcomes and shortening hospital stays. Tracking, recording and continuous real-time monitoring of activity movements can significantly help in following up the correct implementation of a predefined programme. Emerging technologies such as the Internet of Things (IoT), which are driving advances in digital health and revolutionising industries and markets, could be useful in advancing conventional rehabilitation care. They would also enhance the backup intelligence used in the rehabilitation process and provide transparent coordination and information about activity movements among the relevant parties. This thesis provides a motivational background for the problem and a critical analysis of the literature on the key components involved in structuring an IoT-based rehabilitation care monitoring system. The thesis proposes and presents a post-operative hip fracture rehabilitation model derived from existing rules and health programmes. The model reflects the key stages a patient undergoes straight after hospitalisation and clarifies the rehabilitation process involved, its associated events and the main physical movements of interest across all stages of care. Considering the model's monitoring requirements, the thesis highlights the system modelling and development tools used for testing the proof of concept and the overall conceptual ideology. To support this model, the thesis proposes an IoT-enabled wearable movement monitoring system architecture that reflects the key operational functionalities required to monitor patients in real time throughout the rehabilitation process. The conceptual ideology was tested incrementally on ten young, healthy subjects for factors relevant to the recognition and tracking of movements of interest. The analysis addresses the recognition of hip fracture rehabilitation activity movements based on frequency-domain analysis and with respect to sensor localisation. The research findings suggested that the amplitude parameter is suitable for classifying a patient's static state and ambulatory activities, whereas for hip fracture related movements both the frequency content and the related amplitude of the acceleration signal play a significant role. From the analysis, the ankle is considered an appropriate sensor location that can categorise the majority of the activity movements thought to be important during the rehabilitation programme, and a data collection time of four seconds is considered the minimum for recognising a particular activity movement without loss of information or signal distortion.
Furthermore, the thesis presents the importance of personalisation and of a one-minute history of data in improving recognition accuracy and monitoring real-time behaviour. The thesis also examines the impact of edge computing, at the gateway and at the wearable sensor edge, on system performance. The approach provides a solution for an architecture that balances system performance with the functional requirements of remote monitoring. Finally, this thesis offers a clearly defined, structured rehabilitation follow-up programme use case and a conclusion with an indication of our future research work.
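The sketch below illustrates the general style of frequency-domain window analysis described above: a four-second accelerometer window is summarised by its dominant non-DC frequency and amplitude and given a coarse label. The sampling rate, amplitude threshold and labels are illustrative assumptions, not the thesis's actual parameters or classifier.

```python
# Illustrative four-second window classification from the acceleration signal.
import numpy as np

FS = 50  # Hz, assumed wearable-sensor sampling rate

def classify_window(acc_magnitude: np.ndarray) -> str:
    """acc_magnitude: 4-second window (FS * 4 samples) of |acceleration|."""
    x = acc_magnitude - np.mean(acc_magnitude)      # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    k = np.argmax(spectrum[1:]) + 1                 # dominant non-DC bin
    dom_freq, amp = freqs[k], spectrum[k] / len(x)
    if amp < 0.05:                                  # assumed amplitude threshold
        return "static (sitting/standing/lying)"
    return "ambulatory" if dom_freq > 0.5 else "slow rehabilitation movement"

# Synthetic gait-like test signal at roughly 1.8 Hz.
t = np.arange(4 * FS) / FS
walking = 1.0 + 0.3 * np.sin(2 * np.pi * 1.8 * t)
print(classify_window(walking))
```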
- ItemRendezvous in cognitive radio ad-hoc networks(Auckland University of Technology, 2015) Hossain, Md Akbar; Sarkar, Nurul I; Al-Anbuky, AdnanCognitive radio (CR) is a promising technique for enhancing spectrum utilisation by enabling CR users to opportunistically access spectrum holes or channels. A CR ad-hoc network is a multi-channel environment in which channel status changes over time depending on primary users' (PUs) activities. Analogous to control channel establishment in traditional multi-channel ad-hoc networks, rendezvous in a CR ad-hoc network is one of the most important processes for a pair of unknown CR users to initiate communication. Most existing research has utilised a common control channel to achieve rendezvous; this approach creates channel saturation, excessive transmission overhead for control information, and a single point of vulnerability. Traditional rendezvous protocol designs do not support an ad-hoc CR network model. Therefore, this thesis focuses on improving control channel establishment to solve the rendezvous problem and further support CR ad-hoc networks. The thesis proposes a new channel hopping (CH) scheme called extended torus quorum channel hopping (ETQCH) for asymmetric and asynchronous pairwise rendezvous (RDV) in CR ad-hoc networks. ETQCH employs channel ranking information to allocate more slots to high-rank channels than to low-rank ones, and the system dynamically updates the CH sequence by replacing channels from both the licensed and unlicensed bands to protect intermittent PUs. A channel hopping sequence scheme is a mathematical construct that guarantees overlap between two CR users, but successful RDV establishment also depends on a successful channel probe or control packet exchange, which is a MAC layer issue. Therefore, a new MAC protocol named cognitive radio rendezvous (CR-RDV) MAC is proposed to facilitate multiuser contention in CR ad-hoc networks. CR-RDV is developed by redefining the traditional backoff procedure and incorporating a sensing period immediately after the request-to-send, so that the incumbent PU's transmission is protected and blocking problems are resolved. The analysis and simulation results show the potential to minimise service interruption, mitigate node blocking problems and efficiently utilise dynamic radio resources. The thesis also provides a guideline for CR system planners for the design and deployment of dense networks with active PUs.
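To give a flavour of how quorum-based slot patterns guarantee overlap between unsynchronised users, the sketch below builds simple grid quorums (one full row plus one full column of an n x n slot frame) for two nodes and checks that they share a slot under every possible clock offset. This is a basic grid-quorum illustration under assumed frame sizes and offsets, not the thesis's extended torus quorum (ETQCH) construction or its channel-ranking logic.

```python
# Grid-quorum overlap demonstration for two unsynchronised nodes.
def grid_quorum(n, row, col):
    """Slots of an n*n frame covered by one full row plus one full column."""
    return {row * n + c for c in range(n)} | {r * n + col for r in range(n)}

def rotate(slots, offset, frame_len):
    """Model clock asynchrony as a cyclic shift of a node's slot pattern."""
    return {(s + offset) % frame_len for s in slots}

n = 4
frame = n * n
qa = grid_quorum(n, row=1, col=2)     # node A's quorum slots
qb = grid_quorum(n, row=3, col=0)     # node B's quorum slots

# Check overlap for every possible clock offset between the two nodes.
overlaps = [sorted(qa & rotate(qb, off, frame)) for off in range(frame)]
print(all(len(o) > 0 for o in overlaps))   # True for this example
print(overlaps[5])                         # common slots at offset 5
```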
- ItemSensor Network Embedded Intelligence: human comfort ambient intelligence(Auckland University of Technology, 2013) Mohamed Rawi, Mohd Izani; Al-Anbuky, Adnan; Leardini, Paola MariaThis study explored the multidisciplinary domains of the Wireless Sensor Network (WSN) and Ambient Intelligence (AmI) in addressing the comfort of a living space. The thesis addresses the potential for embedding an intelligent engine into a WSN and for aggregating multiple comfort factors in a living space. The four most important comfort factors for humans are taken into account: thermal comfort, visual comfort, indoor air comfort and acoustic comfort. The thesis introduces a WSN-based embedded intelligent system architecture and a system framework for assessing a living space's comfort level. The Human Comfort Ambient Intelligence (HCAmI) system architecture is presented; its key component is a flexible, generic, distributed fuzzy engine embedded within WSN nodes, which serves as the core knowledge component for solving specific human comfort requirements. With the proliferation of pervasive computing, there is an increasing demand for the inclusion of WSNs in wider areas such as buildings, living spaces and system automation. Focusing on buildings and living spaces alone, numerous studies have examined environmental comfort for occupants; smart environments and low-energy homes are amongst the driving forces behind this research. WSN research has also been progressing well and expanding into various aspects of life, such as support for the elderly, environmental sensing and security. Unfortunately, these studies have been conducted within their own disciplines, focusing on specific issues and challenges, and little attention has been paid to bringing them together under one roof. This lack of interdisciplinary research inspired the effort to unite these unconnected research domains and acted as the key motivational catalyst, leading to the specific issue of human comfort that prompted this study. Human comfort deals with providing a comfortable and healthy place for people to live. Hence, in a living space, beyond good design and construction, it is essential to monitor and maintain modifiable environmental factors such as temperature, lighting, humidity, noise, air quality and psychological factors. The adaptability of a functional environmental comfort system, and the ability of a WSN system to address this problem, is a fascinating issue that warranted further investigation. The HCAmI concept was designed and implemented based on a knowledge-based architecture and framework. The approach addressed the component level first, catering for the four key human comfort factors, and then the system-level design. Each individual component was subjected to simulated and real sensor data and tested against a corresponding model built using appropriate tools such as MATLAB Simulink and the Sun SPOT Solarium WSN simulator. The HCAmI system was used to collect raw data from 20/04/2010 to 26/08/2010 (four months of data) in the SeNSe Laboratory, School of Engineering, and a short snapshot of the collected data (from 08:00 am 25/08/2010 to 11:40 am 26/08/2010) is presented as a case study. The main achievement and contribution of this thesis is a distributed fuzzy-logic-based wireless sensor node in the human comfort realm.
The framework, architecture and development presented here show how an integrated human comfort concept can be embedded in a wireless sensor network environment. The modular architecture and framework highlight the flexibility and integrated nature of the design: the knowledge component of each comfort area can be changed easily, and adding or removing comfort components is also catered for. Overall, this thesis adds to the WSN body of knowledge through an embedded distributed generic fuzzy engine, a thermal comfort engine, a spatial sensing engine, a human comfort index engine, an application-layer communication protocol, and the development of specific external sensor drivers and interfaces for the Sun SPOT WSN.
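As a small illustration of the style of rule-based fuzzy evaluation a distributed fuzzy engine on a WSN node might perform, the sketch below fuzzifies temperature and humidity with triangular membership functions and defuzzifies a comfort score by weighted average. The membership functions, rules and 0-10 comfort scale are assumptions for demonstration only and do not reproduce the HCAmI knowledge base or its engines.

```python
# Illustrative fuzzy thermal-comfort evaluation (assumed rules and ranges).
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def thermal_comfort(temp_c, humidity_pct):
    # Fuzzify inputs (assumed ranges).
    cold = tri(temp_c, 5, 12, 19)
    comfy_t = tri(temp_c, 17, 22, 27)
    hot = tri(temp_c, 25, 32, 40)
    dry = tri(humidity_pct, 0, 20, 40)
    ok_h = tri(humidity_pct, 30, 50, 70)
    humid = tri(humidity_pct, 60, 80, 100)

    # Rule strengths (min as AND) mapped to a 0-10 comfort score per rule.
    rules = [
        (min(comfy_t, ok_h), 9.0),   # comfortable temperature and humidity
        (min(comfy_t, humid), 6.0),  # right temperature but muggy
        (max(cold, hot), 2.0),       # clearly too cold or too hot
        (min(comfy_t, dry), 7.0),    # right temperature but dry air
    ]
    num = sum(w * score for w, score in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 5.0  # weighted-average defuzzification

print(round(thermal_comfort(temp_c=23.0, humidity_pct=55.0), 2))
```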