We consider the problem of energy-efficient broadcasting in large ad-hoc networks. Ad-hoc networks are commonly modelled as random geometric graphs (RGGs): nodes are deployed uniformly at random in a square area around the origin, and any two nodes within Euclidean distance 1 of each other are assumed to be able to receive each other's broadcasts. A source node at the origin encodes k data packets of information into n coded packets and transmits them to all its one-hop neighbours. The encoding is such that any node that receives at least k out of the n coded packets can retrieve the original k data packets. Every other node in the network follows a probabilistic forwarding protocol: upon reception of a previously unreceived packet, the node forwards it with probability p and does nothing with probability 1 − p. We are interested in the minimum forwarding probability p which ensures that a large fraction of nodes can decode the information from the source; we deem this a near-broadcast. The performance metric of interest is the expected total number of transmissions at this minimum forwarding probability, where the expectation is over both the forwarding protocol and the realization of the RGG. In comparison with probabilistic forwarding without coding, our treatment of the problem indicates that, with a judicious choice of n, it is possible to reduce the expected total number of transmissions while still ensuring a near-broadcast.
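The protocol above can be sketched as a simple Monte Carlo simulation. This is a minimal illustration only, not the paper's analysis: the node count, square side, and parameter values below are arbitrary assumptions, and the `simulate` helper is hypothetical.

```python
import math
import random

def simulate(num_nodes=300, side=8.0, radius=1.0, p=0.5, n_coded=5, k=3, seed=7):
    """Probabilistic forwarding of n_coded packets on a random geometric graph.

    Nodes are placed uniformly in a side x side square centred at the origin;
    two nodes are neighbours if within Euclidean distance `radius`.  The source
    (node 0, at the origin) transmits every coded packet; each other node
    forwards a previously unreceived packet with probability p.  Returns the
    fraction of nodes holding at least k packets (i.e. able to decode) and the
    total number of transmissions.
    """
    rng = random.Random(seed)
    pts = [(0.0, 0.0)] + [
        (rng.uniform(-side / 2, side / 2), rng.uniform(-side / 2, side / 2))
        for _ in range(num_nodes - 1)
    ]
    # Brute-force adjacency; a spatial grid would be faster for large networks.
    nbrs = [[] for _ in range(num_nodes)]
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            if math.dist(pts[i], pts[j]) <= radius:
                nbrs[i].append(j)
                nbrs[j].append(i)
    transmissions = 0
    received = [set() for _ in range(num_nodes)]
    for pkt in range(n_coded):
        received[0].add(pkt)
        frontier = [0]                    # the source always transmits
        while frontier:
            nxt = []
            for u in frontier:
                transmissions += 1
                for v in nbrs[u]:
                    if pkt not in received[v]:
                        received[v].add(pkt)
                        if rng.random() < p:   # forward with probability p
                            nxt.append(v)
            frontier = nxt
    decoders = sum(1 for r in received if len(r) >= k)
    return decoders / num_nodes, transmissions
```

Sweeping `p` upward for a fixed `n_coded` and recording the smallest `p` at which the decoding fraction crosses a target (say 0.9) gives an empirical handle on the minimum forwarding probability discussed above.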
Enterprise systems routinely use tiered storage, a hierarchy of storage devices that vary in speed and size. One key to obtaining good performance in such a hierarchy is to migrate data elements intelligently to the appropriate tier, for example by moving the most frequently used data towards the fastest tier and the least used data towards the slowest tier. Tiering is typically done based on usage statistics collected over relatively long time periods. In this paper, we consider a much more agile tiering mechanism called Adaptive Intelligent Tiering (AIT), which can dynamically adapt to the changing behavior of storage accesses by the running applications. The AIT mechanism uses a deep learning model to generate a set of candidate movements and employs a reinforcement learning mechanism to further refine those candidates. Based on extensive simulations of a 3-tier system, we show that, compared with several other methods, the proposed scheme improves workload performance by up to 85% on storage traces with a wide range of characteristics.
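To make the conventional baseline concrete, the following is a toy sketch of usage-statistics tiering in a 3-tier hierarchy: access counts are collected over a window and the most-accessed elements are greedily placed in the fastest tier. This illustrates only the traditional approach that AIT improves upon, not the AIT mechanism itself; the latencies, capacities, and the `retier` helper are arbitrary assumptions for illustration.

```python
from collections import Counter

# Per-access latencies (arbitrary illustrative units) for a 3-tier hierarchy,
# fastest to slowest; capacities are in number of data elements.
TIER_LATENCY = [1, 10, 100]
TIER_CAPACITY = [4, 16, 64]

def retier(access_counts):
    """Greedy baseline: place the most-accessed elements in the fastest tiers."""
    placement = {}
    ranked = [elem for elem, _ in access_counts.most_common()]
    tier, used = 0, 0
    for elem in ranked:
        while tier < len(TIER_CAPACITY) and used >= TIER_CAPACITY[tier]:
            tier, used = tier + 1, 0
        placement[elem] = min(tier, len(TIER_CAPACITY) - 1)
        used += 1
    return placement

def total_latency(trace, placement):
    """Total access latency of a trace; unknown elements sit in the slowest tier."""
    slowest = len(TIER_LATENCY) - 1
    return sum(TIER_LATENCY[placement.get(elem, slowest)] for elem in trace)

# Usage: collect statistics over a window, then migrate once per window.
trace = [0, 1, 0, 2, 0, 1, 3, 4, 5, 0, 1, 2] * 3
placement = retier(Counter(trace))
print(total_latency(trace, placement))  # prints 90
```

Because such a scheme only re-tiers after a long statistics window, it reacts slowly to shifting access patterns; AIT's contribution is to make these migration decisions adaptively as the workload changes.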