Objectives

TACTILENet: Towards Agile, effiCient, auTonomous and massIvely LargE Network of things

We aim to address the following challenges in the scope of this project:

•   One of the most crucial challenges is the physical scarcity of the radio frequency (RF) spectrum allocated for cellular communications.
•   Another challenge is that the deployment of advanced wireless technologies comes at the cost of high energy consumption.
•   Third, users nowadays expect higher quality in several facets of the service they receive, e.g., usability, availability, continuity of the connection, and integrity of the service.
•   Finally, the massive number of IoT devices will challenge networks to manage an unprecedented number of connections, where each connected machine or sensor transmits small blocks of data sporadically.

TACTILENet has a pragmatic approach to the design of future communication networks. Instead of aiming to design the highest capacity, highest reliability communication network, we intend to design the network architecture that can best adapt to the QoE requirements and design constraints for a specific scenario. This might mean a different operation mode depending on the rate, reliability and latency requirements of the underlying applications, as well as the available spectral and spatial resources, or the reliability of the underlying energy sources. The ultimate goal will be to enable the coexistence of all these multiple modes of operation in a seamless manner.

TACTILENet will address these challenges focusing on the following research pillars:

•  Network densification and cloud-RAN: Network densification is a combination of spatial densification and spectral aggregation. Spatial densification is realized by increasing the number of antennas per node (user device and base station) and increasing the density of base stations deployed in a given geographic area, while ensuring a nearly uniform distribution of users among the base stations. Spectral aggregation refers to using larger amounts of electromagnetic spectrum, spanning all the way from 500 MHz into the millimeter-wave (mmWave) bands (30–300 GHz).

While network densification is seen as a key technology enabler for 5G networks, it brings its own challenges, such as an increasing demand for coordination among cells and for high-capacity backhaul links. The cloud-RAN (C-RAN) architecture can resolve these problems by allowing centralized baseband processing, which also reduces infrastructure cost and energy consumption by removing and/or suppressing the baseband units at some of the access points (a.k.a. remote radio heads). The centralization of information processing enabled by C-RANs allows effective interference management within the area covered by the remote radio heads. This, in turn, promises to be a key component of the solution to the so-called “spectrum crunch” problem, caused by the wireless interference among an ever-increasing number of mobile users.

The main tenet of C-RANs is the separation of radio transmission/reception, carried out at the radio units, from information processing, carried out at the central units within the “cloud”. Within this basic paradigm, there are unexplored degrees of freedom in the demarcation between the two functionalities. In particular, the processing of the informative (data) portion of the wireless transmissions, in the form of encoding and decoding, naturally falls within the domain of the cloud. However, the processing of the overhead associated with the wireless transmissions, for the purposes of synchronization and channel state acquisition, may be profitably decentralized at the radio units, or shared between the radio units and the control units in the cloud. While the state of the art assumes that both data and overhead processing reside in the cloud, a full investigation of the potential of the C-RAN technology must leverage the added flexibility in the allocation of these two functionalities within the C-RAN architecture.
The main roadblock to realizing the above promises of C-RANs is the effective integration of the wireless interface provided by the remote radio heads with the backhaul network that links the radio units to the information processing nodes in the cloud. This integration requires high-capacity backhaul links and introduces extra latency, which calls for rethinking the way wireless access protocols are designed. In most urban environments, it is either expensive or impossible to install a fiber backhaul link to each access point; mmWave backhaul connections are therefore seen as the only viable option. On the other hand, sustaining a high-quality C-RAN architecture over wireless backhaul links requires advanced coding and communication techniques in order to fully exploit the limited available backhaul resources.
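The pressure that full centralization puts on the fronthaul/backhaul can be seen with a back-of-the-envelope sketch. The numbers below (bandwidth, sample width, antenna count, spectral efficiency) are illustrative assumptions of our own, not TACTILENet specifications: when a remote radio head forwards raw I/Q samples to the cloud, the link must carry far more bits than the user data those samples encode.

```python
# Back-of-the-envelope sketch (illustrative numbers, not TACTILENet
# specifications): fronthaul load of a fully centralized C-RAN, where the
# remote radio head forwards raw I/Q samples, versus the user data rate
# that those samples actually carry.

def iq_fronthaul_rate_bps(bandwidth_hz, bits_per_sample, antennas):
    """Raw I/Q stream: 2 components (I and Q) per complex sample,
    one complex sample per Hz of bandwidth, per antenna."""
    return 2 * bandwidth_hz * bits_per_sample * antennas

def user_data_rate_bps(bandwidth_hz, spectral_efficiency_bps_per_hz):
    """Delivered user data rate for a given spectral efficiency."""
    return bandwidth_hz * spectral_efficiency_bps_per_hz

# 20 MHz carrier, 15-bit samples, 2 antennas, 4 bit/s/Hz delivered:
fronthaul = iq_fronthaul_rate_bps(20e6, bits_per_sample=15, antennas=2)
data = user_data_rate_bps(20e6, spectral_efficiency_bps_per_hz=4.0)
print(fronthaul / data)  # 15x expansion: 1.2 Gbit/s of fronthaul for 80 Mbit/s of data
```

The roughly order-of-magnitude expansion factor is what motivates both mmWave backhaul and the more flexible functional splits discussed above, which forward partially processed signals rather than raw samples.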

•  Energy harvesting and green communications: Energy efficiency has always been at the center of wireless system design; however, 5G networks will exacerbate energy concerns at various levels. First, with network densification and the increasing number of devices and access points, the overall energy consumption of wireless systems is becoming ever higher, increasing their environmental footprint. Second, the increasing demand for high-data-rate applications and services has multiplied the energy consumption of mobile devices. Today, most smartphones and tablets can be called “wireless” only in a limited sense, as they have to be wired to a power socket much of the time. Hence, there is increasing pressure for energy-efficient communication and networking techniques at both the device and the network level.

The IoT introduces yet another challenge for energy efficiency. Most IoT devices are limited in cost and size, which puts significant constraints on battery size and capacity. While charging the battery of a smartphone every day is feasible, replacing or recharging the batteries of tens or hundreds of IoT devices every single day is out of the question. Therefore, a promising solution to power future IoT devices is to harvest available ambient energy [Gunduz14]. The ultimate promise of energy harvesting (EH) is a self-sustainable, maintenance-free network of perpetually communicating devices. With this promise comes a fundamental shift in design principles compared to traditional battery-operated systems: whereas minimizing energy consumption is crucial to prolong network lifetime in the latter, in EH networks the objective is the intelligent management of the harvested energy to ensure long-term, uninterrupted operation. In TACTILENet, we will combine advanced optimization and learning algorithms to dynamically adapt the communication protocols to the state of the energy harvesting processes and the battery states of the devices.
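The shift from energy minimization to energy management can be illustrated with a toy model. The sketch below is our own illustration, not a TACTILENet algorithm: an EH sensor with a finite battery harvests a random amount of energy each slot and a transmission policy decides, from the battery state, whether to spend energy on a packet. Even in this simple setting, different policies waste different amounts of harvested energy.

```python
import random

# Toy EH sketch (our illustration, not a TACTILENet algorithm): a sensor
# with a finite battery harvests 0-2 energy units per slot; a transmission
# costs 3 units. A policy maps the battery state to a transmit decision.

def simulate(policy, slots=10_000, capacity=10, tx_cost=3, seed=0):
    rng = random.Random(seed)
    battery, sent = 0, 0
    for _ in range(slots):
        # Harvest; energy beyond the battery capacity is lost (overflow).
        battery = min(capacity, battery + rng.choice([0, 1, 2]))
        if battery >= tx_cost and policy(battery, capacity):
            battery -= tx_cost
            sent += 1
    return sent / slots  # throughput in packets per slot

greedy = lambda battery, capacity: True                        # send ASAP
conservative = lambda battery, capacity: battery >= capacity   # only when full
```

Here the greedy policy keeps the battery low and never overflows, while the overly conservative one loses harvested energy to battery overflow; smarter policies of the kind envisioned in the project would additionally use learned predictions of the harvesting process.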

•  Provision of end-to-end QoE: The success of many mobile services derives in particular from a user-centered approach to designing the whole process of content production, service activation, content consumption, service management and updating. It follows that the management of Quality of Experience (QoE) is undoubtedly a crucial dimension for the deployment of successful future services. While QoE is straightforward to understand, it is extremely complex to implement in real systems, since many interdependent variables affect the QoE, spanning multidisciplinary areas including multimedia signal processing, communications, computer networking, economics, psychology and sociology. Similarly, IoT communication is typically a component of a distributed control system, in which sensors and actuators communicate for coordination and control purposes. This imposes further constraints on latency and reliability; but, more importantly, the communication rate is no longer the appropriate performance metric, and the system has to be designed to minimize an end-to-end performance metric (that is, a QoE for machine-type applications), such as the end-to-end quality of the estimated sensor measurements, or the success of the underlying control goal. Under such end-to-end performance metrics, which involve a large number of system parameters, the highly structured and layered network architectures we have today are extremely suboptimal, and completely novel communication and networking algorithms are needed.
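A minimal example of why rate alone is the wrong metric for machine-type QoE, using the standard Gaussian rate-distortion function (the scenario and the numbers are our own illustrative assumptions): a rate-R description of a unit-variance Gaussian measurement achieves mean-squared distortion 2^(-2R), but if the packet is lost the receiver falls back to the prior and incurs distortion 1. The end-to-end metric weighs both rate and reliability.

```python
# Toy end-to-end metric (our illustration): expected reconstruction
# distortion of a Gaussian sensor measurement, combining the source coding
# rate (Gaussian rate-distortion: sigma^2 * 2^(-2R)) with packet loss.

def expected_distortion(rate_bits, loss_prob, sigma2=1.0):
    received = sigma2 * 2 ** (-2 * rate_bits)  # distortion if delivered
    lost = sigma2                              # fall back to the prior mean
    return (1 - loss_prob) * received + loss_prob * lost

# A high-rate but unreliable link is worse end-to-end than a
# low-rate reliable one:
risky = expected_distortion(rate_bits=8.0, loss_prob=0.10)   # ~0.100
safe  = expected_distortion(rate_bits=2.0, loss_prob=0.001)  # ~0.063
```

Under this metric, the 2-bit reliable description beats the 8-bit unreliable one; a rate-maximizing layered design would choose the opposite, which is exactly the kind of suboptimality the project targets.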

•  Cross-layer techniques for massive low-bandwidth machine-type communications (MTC): There will be many scenarios in 5G networks where the data size of each individual transmission is small, going down to a few bytes. Under such extremely low data payloads, the cost of sending metadata (control information) in the packets becomes very significant. This calls for a revision of the basic principles used to packetize data, accounting for the resources used for overhead (channel estimation and metadata). Furthermore, some of the emerging MTC applications, such as industrial automation, require highly reliable transmission under very stringent latency constraints. This calls for the information-theoretic machinery for the transmission of short packets as a benchmark to determine what kind of reliability/latency guarantees it is possible to provide. Methodologically, there is a need for an optimized design with a much tighter coupling between the data and control planes.
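The finite-blocklength benchmark mentioned above can be sketched with the standard normal approximation for the real AWGN channel, R(n, eps) ≈ C − sqrt(V/n)·Q⁻¹(eps) + log2(n)/(2n), where C is capacity and V the channel dispersion. The code below is a sketch of this well-known approximation, not a TACTILENet design; the SNR and blocklength values are illustrative.

```python
import math
from statistics import NormalDist

# Normal approximation for the maximal coding rate (bits per channel use)
# over a real AWGN channel at blocklength n and error probability eps:
#   R(n, eps) ~ C - sqrt(V/n) * Qinv(eps) + log2(n) / (2n)
# Short packets pay a substantial rate penalty relative to capacity C.

def awgn_short_packet_rate(snr, n, eps):
    c = 0.5 * math.log2(1 + snr)  # capacity of the real AWGN channel
    # Channel dispersion V, in bits^2 per channel use:
    v = (snr * (snr + 2)) / (2 * (1 + snr) ** 2) * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)  # Qinv(eps)
    return c - math.sqrt(v / n) * q_inv + math.log2(n) / (2 * n)

# At 0 dB SNR and eps = 1e-5, a 100-symbol packet achieves well under
# half of the 0.5 bit/channel-use capacity:
short = awgn_short_packet_rate(snr=1.0, n=100, eps=1e-5)
long_ = awgn_short_packet_rate(snr=1.0, n=100_000, eps=1e-5)
```

This gap between `short` and `long_`, before even counting metadata and channel estimation overhead, is what makes the benchmark essential for judging which reliability/latency guarantees are feasible for MTC.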