
A Context-based Sensed Data Search on Edge Computing for Finding Moving People

Publisher: IEEE

Abstract:


This paper proposes a novel context-based search method on edge computing for finding data of moving people. As the Internet of Things (IoT) spreads, many sensors are going to be connected to wide area networks. To utilize sensed data for various services, the data first need to be searched for. Unlike term-based web content search, searching sensed data requires context to be derived from enormous amounts of data by heavy processing such as image analysis, which incurs huge time and cost. Time and cost can be reduced by estimating the edge server that stores the required data. This paper proposes a context-based search method that selects the data to be analyzed on the basis of the existence probability of the context, determined by the installation area of the edge servers and the data generation time. It also selects a server with a low load to reduce response time. Since the proposed method incorporates not only low layer information such as computer load but also high layer information such as the existence probability of the context in the data, it enables quick, low-cost context-based search for sensed data. Simulations showed that the proposed method reduces the response time by 95% compared with selecting data without using the existence probability of context.
Date of Conference: 14 June 2021 - 31 July 2021
Date Added to IEEE Xplore: 09 November 2021
Conference Location: New Orleans, LA, USA

SECTION I.

Introduction

As the Internet of Things (IoT) spreads, networked devices in our living environment are increasing and are predicted to number 29.3 billion by 2023 [1]. Information is collected from environments by various sensors distributed in wide area networks. We expect that sensed data will be openly exposed to a large number of services and users via the Internet, like web pages. Since much more sensed data will be available than in existing systems, an important function will be finding the data on the basis of their context. Context is information with meaningful characteristics, such as a temperature exceeding a specific threshold value or a specific object appearing in an image. In IoT, most service providers and users require the context, not the sensor itself. Searching for data or devices by context is called context-based search [2]. Context-based search is necessary for searching variously formatted data generated in real time by diverse devices. However, it incurs higher computing and networking costs than web content search. General web content search is term-based, using indexes of the words contained in the content [3] [4]. In contrast, to handle sensed data, the context of data in various formats needs to be derived without being limited to terms [5]. For example, to find a camera's images containing a suspicious person, the data need to be analyzed by image analysis software such as OpenCV [6] or YOLO [7], and such analysis involves heavy processing loads.

Moreover, some sensors generate sensed data frequently. Common IP-based cameras capture video at 1920×1080 resolution and 30 frames per second [8]. Deriving the context from all the data would incur enormous time and cost. Furthermore, when cloud computing resources are used, transferring data from sensors to the computer takes a long time and incurs a large network cost.

Edge computing [9] is an effective architecture for solving the network problems in context-based search. Figure 1 shows an overview of a context-based search on edge servers. A cloud server receives the search request with search parameters and also manages the edge servers. Edge servers distributed in the network collect data generated by nearby sensors. They also analyze the data to derive its context when they receive a data search request from the cloud server. Since only the context is sent over the network between the edge servers and the cloud, time and network costs are small. There is another advantage to adopting edge computing. Since edge servers store data from their closest sensors, data can be managed by area, and the area characteristics of the data can be used for searching. Many types of sensed data have regional characteristics. For example, cars are likely to be captured by cameras on roads, and wildlife is captured only by cameras in its habitat. If context is derived only on the edge servers in the area where the required data are generated, computing costs will be lower.

However, for most context-based searches, such as one for a camera image of a specific person, it is not known in advance which edge server stores the required data. Thus, the server that stores the required data needs to be estimated without explicit information.

Fig. 1. Overview of context-based search for sensed data on edge computing.

This paper proposes a method of allocating a context-based search request to edge servers to minimize search time with minimum computing cost. The method targets the search for a specific moving person, which is a typical context-based search request. The method is based on the following two ideas.

  • To reduce the average response time for all requests, the load on edge servers has to be balanced. Unlike the cloud, edge servers are deployed at separate locations, so when requests concentrate at a particular location, the corresponding edge servers become overloaded and the response time increases.

  • To estimate the edge server that stores the required data, we considered the spatiotemporal continuity of the sensed data related to a moving person. We formulated the existence probability that data stored by an edge server contain the required context.

The contribution of this paper is a novel context-based search method on edge computing that incorporates both low layer information, such as computer load, and high layer information, such as the probability that data contain the required context. The proposed method is versatile because it does not depend on a specific service or data format.

The rest of the paper is organized as follows. Section II surveys the related work. Section III describes the proposed context-based search method for finding a specific moving person. Section IV presents the simulator and experimental results. Section V discusses benefits and challenges for applying the method to real networks. Finally, Section VI concludes the paper.

SECTION II.

Related Work

Our research is related to context-based search and distributed processing using computers in wide area networks.

First, we describe prior work related to context-based search. COBASEN, proposed by Lunardi et al. [2], is a software framework that enables devices to be searched for on the basis of attributes such as device location and purpose. It consists of two modules: the Context Module, which extracts the context from data and generates an index, and the Search Engine, which searches for devices corresponding to the search query on the basis of the index. Although COBASEN is useful for context-based search, Lunardi et al. did not describe the specific processing of the Context Module, and we regard this processing load as a problem. Elahi et al. [10] proposed a method of ranking sensors so that they can be searched for efficiently like web content. The method defines the relations between sensors and their context and then predicts the sensors that match the search query on the basis of the sensors' output periodicity. Although the method searches for sensors effectively, the paper does not mention the process of extracting context from sensor output, and statistical observation over a certain period of time is required to build the prediction model of the sensors. The method therefore cannot handle various context-based search parameters that are generated dynamically.

Next, we describe related work on distributed data processing in edge computing. There are many studies on application allocation to computers distributed in a network [11]. Alicherry and Lakshman [12] proposed a resource allocation algorithm over a wide area network that minimizes communication costs and delays. In response to a user's request to deploy virtual machines (VMs), the algorithm selects a neighboring data center when a single data center does not have enough capacity. The algorithm takes into account the data center VM capacity and network paths but does not consider optimization for handling data from users. Cheng et al. [13] proposed a virtual network (VN) embedding method that considers network topology as well as central processing unit (CPU) capacity and bandwidth. The method reflects the quality of resources and connections by using the theory of PageRank used by Google's search engine. It is effective in terms of long-term revenue and the acceptance ratio of requests. However, it does not take into account the context or source of the data. Chowdhury et al. [14] proposed a VN embedding platform that handles multiple infrastructure providers (InPs). It can embed VNs in appropriate InPs considering geographical locations by using a geographical address management scheme and protocol. They also proposed a method for sharing information such as the price of resources among InPs and a method for assigning processes across a large network including multiple administrators. However, these methods assume only location-aware requests. Context-based search has to handle requests that do not directly specify the location.

Furthermore, the application-aware workload allocation (AREA) algorithm proposed by Fan and Ansari [15] decides workload assignments and optimal resource allocation among servers to minimize the total response time of various types of applications on the basis of edge computing. Although it handles diverse computing sizes and quality of service (QoS) requirements of different types of applications, it does not account for the data stored in each server. Context-based search needs to consider the heterogeneity of the data stored on each server. Breitbach et al. [16] proposed context-aware data and task placement in edge computing. With their Performance-aware Task Scheduling, the scheduler allocates tasks to the fastest idle device that holds a replica of the required data, which effectively reduces task turnaround times. However, the method assumes that the edge servers storing the required data are known. In our target scenario, the server storing the required data is not known in advance. We take the approach of estimating the area where the required data were generated and selecting the data and edge servers on the basis of physical position.

In addition, we mention the current status of location management in cloud services. Some cloud services have recently become able to take computer locations into account. Amazon Elastic Compute Cloud [17] and Microsoft Azure [18] allow a computer's geographical location to be specified in units such as regions and availability zones. These can be specified by the user for purposes such as minimizing network latency or ensuring redundancy against data center failure. As of 2020, there were fewer than 100 regions worldwide, which is far from the granularity of edge computing. Also, users are not given the ability to handle data and processing efficiently across a large number of data centers.

In summary, prior context-based search methods cannot quickly find required data, such as camera images of a specific person, generated by many sensors in a wide area network. Moreover, prior distributed data processing on edge computers is not suitable for context-based searches that do not specify servers. Our approach finds the required data quickly and at low computing cost by considering both low layer information such as CPU load and high layer information such as the probability that data contain the context.

SECTION III.

Proposed Context-Based Search Method

This section describes our context-based search method for finding a target person, which is a typical service that utilizes devices in a wide area network. In our scenario, the target person is, for example, a criminal on the run or a lost child. The goal is to find the data containing information about the target at a specific time from among the data generated by cameras in the city. Figure 2 shows the prerequisites of the scenario. The target person changes his/her position over time. The target person's images are captured by cameras, and an edge server in the same area stores the data divided at predetermined time intervals. The data are stored with meta-information including the generation time, on the premise that the positions of the edge servers are known to the system. The user of the search system requests data by context and generation time. For example, the context could be a specific person's image, and the generation time could be 1 hour before the current time. The system then searches for the required data by selecting the data from which to derive context.
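As a concrete illustration of this setup, the following minimal Python sketch (ours, not part of the paper; all names are assumptions) models one stored unit of sensed data with the meta-information described above, namely the position of the storing edge server and the generation time.

# Illustrative data model for one stored unit of sensed data (names are ours).
from dataclasses import dataclass

@dataclass
class DataUnit:
    server_position: float   # position of the edge server that stores the data
    generated_time: int      # time step at which the data were generated
    contains_target: bool    # ground truth used only by a simulator, not by the search

# Example: data generated at time step 3 by the edge server at position 12.0.
unit = DataUnit(server_position=12.0, generated_time=3, contains_target=False)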

A. Priority of data to derive context

Data from which to derive the context are prioritized on the basis of the characteristics of the data and the load on the computer where the data are stored. The priority is calculated as follows.

$$P_i = \frac{T_{c_i} k_i}{E_i + \omega V_i}, \quad i \in I \qquad (1)$$

I denotes the set of all data in the edge servers, and i ∈ I denotes one of them. P_i denotes the priority value of i ∈ I, and a smaller value denotes a higher priority. Our method selects the data with the smallest P_i. T_ci is the default response time of i ∈ I, and k_i is the number of concurrently executing processes on the edge server storing i ∈ I. The server's response time depends on the computer's load, and k_i corresponds to the number of search requests, which lengthens the data response time. Although the actual increase in response time caused by concurrent processes depends on the computer's architecture, we define it simply as a multiple of the number of concurrent processes.

E_i is the existence probability, V_i is the value of the data i ∈ I, and ω is the weight determining their ratio. The existence probability is the probability that the data contain the required context and is effective for prioritizing the data to be analyzed. The data value is the gain obtained when data including the target context are found. The required data are the most valuable and have the highest V_i. Other data are also valuable when finding them makes it possible to obtain a more accurate existence probability. The sum of E_i and the weighted V_i is used for deriving the priority. ω is set according to the type of data and the scenario because the appropriate value depends on them.

By integrating the characteristics of the data (such as existence probability and value) and the number of concurrently executing requests on each edge server, Equation 1 selects valuable data without concentrating requests on a specific edge server. It reduces not only the average search time but also computing costs, because an application ends its processing when the required data are found.

The formulas for calculating the existence probability and data value in the scenario of finding a specific moving person are presented in the next subsection. Furthermore, Algorithm 1 shows the whole process of the method.
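As an illustration only (ours, with assumed parameter names), a minimal Python sketch of Equation (1) is shown below: it scores a candidate data unit from its default response time, the number of concurrent processes on its edge server, its existence probability, its data value, and the weight ω, where a smaller score means a higher priority.

# Sketch of Equation (1): P_i = T_ci * k_i / (E_i + w * V_i); smaller is higher priority.
def priority(t_c: float, k: int, e: float, v: float, w: float) -> float:
    return t_c * k / (e + w * v)

# Example: two candidate data units; the one with the smaller score is analyzed first.
p_a = priority(t_c=10.0, k=3, e=0.30, v=1.0, w=0.001)
p_b = priority(t_c=10.0, k=1, e=0.05, v=0.8, w=0.001)
print(min(("a", p_a), ("b", p_b), key=lambda x: x[1]))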

Fig. 2. Configuration of target person, edge servers, and stored data in the scenario.

B. Existence probability of context

The existence probability of context is calculated by Equation 2. It is based on a normal distribution with the last discovery point as the center.

$$E_i(\mu_i) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\!\left(-\frac{(\mu_i - \mu_L)^2}{2\sigma_i^2}\right), \quad i \in I \qquad (2)$$

μ_i denotes the position of the edge server storing i ∈ I, and μ_L denotes the position of the edge server where data of the target person were last found. σ_i denotes the standard deviation of the normal distribution. Although the actual movement of a person is more complicated, we adopted this simple equation to emphasize the generality of the method. The equation reflects the physical time of movement: the farther a location is from the last discovery point, the less likely the person is to have arrived there. Furthermore, σ_i depends on the time and is determined as follows.

$$\sigma_i(T_i) = \alpha\,(T_i - T_L) + \sigma_D, \quad i \in I \qquad (3)$$

T_i denotes the generation time of the data i ∈ I, and T_L denotes the last discovery time of the required context. α is the variance increase coefficient, and σ_D is the base value. The spread becomes larger as time elapses from the last discovery time. Since the appropriate values of α and σ_D differ depending on the speed of the person, they have to be set according to the service scenario.
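The following Python sketch (ours; parameter names and default values are assumptions) combines Equations (2) and (3): the spread widens with the elapsed time since the last discovery, and the existence probability is the normal density centered on the last discovery position.

import math

# Sketch of Equations (2) and (3): normal density centered on the last discovery point,
# with a spread that widens as time elapses since the last discovery.
def sigma(t_i: float, t_last: float, alpha: float = 0.5, sigma_d: float = 0.5) -> float:
    return alpha * (t_i - t_last) + sigma_d          # Eq. (3)

def existence_probability(mu_i: float, mu_last: float, t_i: float, t_last: float) -> float:
    s = sigma(t_i, t_last)
    return math.exp(-((mu_i - mu_last) ** 2) / (2 * s * s)) / math.sqrt(2 * math.pi * s * s)  # Eq. (2)

# Example: a server two positions away, two time steps after the last discovery.
print(existence_probability(mu_i=2.0, mu_last=0.0, t_i=2, t_last=0))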

C. Value of data

The value of data means their worth to the search request. In other words, it indicates how much the data contribute to finding the required data. It is defined as follows.

$$V_i(T_i) = 1 - \left|\frac{T_R - T_i}{T_s}\right|, \quad i \in I \qquad (4)$$

V_i is the value of the data i ∈ I, T_i is the time when the data were generated, and T_R is the generation time of the required data. T_i and T_R are expressed as the elapsed time from a common reference time. T_s is the maximum retention time of the data, which depends on the storage size of the edge server. The equation shows that data at the required time have the highest value, and the value decreases linearly as the time gap between the required data and the selected data increases. This is because data farther from the required time contribute less to updating the existence probability for finding the required data, and multiple analyses will be performed until the required data are found.
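A short sketch of Equation (4) in Python (ours; argument names are assumptions): the value is 1 for data generated at the required time and falls off linearly with the time gap, normalized by the storage window T_s.

# Sketch of Equation (4): V_i = 1 - |T_R - T_i| / T_s; highest at the required time.
def data_value(t_i: float, t_required: float, t_storage: float) -> float:
    return 1.0 - abs(t_required - t_i) / t_storage

# Example: with a 5-step storage window, data 2 steps before the required time score 0.6.
print(data_value(t_i=3, t_required=5, t_storage=5))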

Algorithm 1: Selection of data to derive the context

I: set of data to be searched
i: an item of data included in I
The formulas for calculating P_i, E_i, and V_i are given in Equations (1)-(4).

1:  Set initial values of μ_L and T_L
2:  loop
3:    i ← 0
4:    Count the number of elements in the set I as n
5:    for i = 1 to n do
6:      Calculate E_i and V_i
7:      Count the number of requests being executed on the edge server storing i as k_i
8:      Calculate P_i from E_i, V_i, and k_i
9:      i ← i + 1
10:   end for
11:   Identify i with the smallest P_i
12:   Derive the context from i by analyzing it
13:   if i includes the required context at the required time then
14:     exit loop
15:   else if i includes the required context then
16:     Remove i from I
17:     μ_L ← μ_i
18:     T_L ← T_i
19: end loop

After P_i is calculated for all i ∈ I on the basis of the existence probability and the value of the data, the data with the minimum P_i are selected. The server storing the selected data receives the request to derive the context from them. To minimize computing cost, the method selects one unit of data at a time for a request and repeatedly selects other data until it finds the required data. That is, the total response time depends on the number of repeated selections and the number of concurrent processes on the edge servers. Note that μ_L in Equation 2 and T_L in Equation 3 are updated when data are found that have a different time from the required data but contain information about the target person.
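To make Algorithm 1 concrete, the following compact, self-contained Python sketch (ours; the toy data set, server layout, parameter values, and the oracle flag that stands in for image analysis are all assumptions) repeatedly scores every remaining data unit with Equation (1), analyzes the best-scored unit, and updates μ_L and T_L whenever the target is found at a time other than the required one.

import math

# Self-contained toy sketch of Algorithm 1 (illustrative only; names and values are ours).
# Each data unit: (server position, generated time, oracle flag for "contains the target").
data = [(pos, t, pos == t) for pos in range(6) for t in range(1, 6)]  # target moves +1 per step
T_REQUIRED, T_STORAGE, OMEGA, T_C = 5, 5, 0.001, 10.0
mu_last, t_last = 0.0, 0                 # initial values of mu_L and T_L
load = {pos: 1 for pos in range(6)}      # k_i: concurrent processes per edge server (toy values)

def existence(mu_i, t_i):                # Eq. (2) with Eq. (3), alpha = sigma_D = 0.5
    s = 0.5 * (t_i - t_last) + 0.5
    return math.exp(-((mu_i - mu_last) ** 2) / (2 * s * s)) / math.sqrt(2 * math.pi * s * s)

def value(t_i):                          # Eq. (4)
    return 1.0 - abs(T_REQUIRED - t_i) / T_STORAGE

remaining = list(data)
while remaining:
    # Score every remaining unit by Eq. (1) and analyze the one with the smallest P_i.
    scored = [(T_C * load[pos] / (existence(pos, t) + OMEGA * value(t)), pos, t, hit)
              for pos, t, hit in remaining]
    _, pos, t, hit = min(scored)
    remaining.remove((pos, t, hit))
    if hit and t == T_REQUIRED:          # required data found: stop
        print(f"Required data found at server {pos}, time {t}")
        break
    if hit:                              # target seen at another time: update last discovery
        mu_last, t_last = float(pos), t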

SECTION IV.

Experiments

We evaluated the effect of the proposed method by simulation. We built an original simulator that virtually simulates the movement of the person and the sensed data stored in edge servers. It also generates the search requests and calculates the response time, which is the time from receiving a request to finding the required data. In the experiment, the response time was calculated on the basis of the number of repeated selections and the number of concurrent requests on the servers.

A. The model of target person and stored data

To compare and evaluate the applicability of the proposed method to various movement models, the simulator simulates the normal distribution movement shown in Equation 5 and the straight movement shown in Equation 6.

Model 1: Normal distribution movement is defined as follows.

$$\mu_{T+1} \sim N(\mu_T, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(\mu - \mu_T)^2}{2\sigma^2}\right) \qquad (5)$$

Model 2: Straight movement is defined as follows.

$$\mu_{T+1} = \mu_T + 1 \qquad (6)$$

μ_T denotes the target's position at time T_j, j ∈ {0, 1, 2, 3, 4, 5}, where j indexes the successive time steps and the value of T_j is the integer corresponding to j, e.g., T_0 is 0, T_1 is 1, and T_5 is 5. The simulator places a person at a random position μ_0 at time T_0. The person then changes position by following each equation. In the normal distribution movement, σ was set to 1.0. In the straight movement, the person moves to the area of an adjacent edge server at every time step. Edge servers are placed at intervals of one unit distance, which is the range of their areas, and each stores data for T_1 to T_5. That is, each edge server holds five units of data.

In this simulator, instead of actual sensed data, the data stored in the edge servers are simple data that only indicate whether they contain the context of the target person. At each time, the server in the same area as the target person stores data containing the target person's context. For evaluation, the simulator did not use this context to select the data; the context of the data became visible only after the data were selected. The simulator used μ_0 as the initial μ_L and T_0 as the initial T_L. Under the above conditions, we generated data for 1000 independent target people.
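The two movement models can be reproduced with a few lines of Python (our sketch; random.gauss stands in for drawing from the normal distribution of Equation (5)).

import random

# Sketch of the two movement models used by the simulator (ours).
def normal_distribution_movement(mu_t: float, sigma: float = 1.0) -> float:
    # Eq. (5): the next position is drawn from N(mu_t, sigma^2).
    return random.gauss(mu_t, sigma)

def straight_movement(mu_t: float) -> float:
    # Eq. (6): the target moves to the adjacent edge-server area at every time step.
    return mu_t + 1

# Example trajectory over time steps T_1..T_5 starting from a random position at T_0.
mu = random.uniform(0, 10)
trajectory = [mu]
for _ in range(5):
    mu = normal_distribution_movement(mu)
    trajectory.append(mu)
print(trajectory)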

B. Configuration of search requests

The simulator processed 1000 search requests, each corresponding to a different target. All search requests were for data containing the target person at T_5. The simulator sequentially selected data for the 1000 search requests and then started checking the context of the selected data. When the required data were not found, the simulator repeatedly selected other data for the request. Finally, the simulator calculated the total response time over all edge servers. In the simulation, the default response time of all edge servers was the same: T_ci was set to 10 seconds. The response time was calculated in proportion to the number of simultaneous requests on the edge server. Both α and σ_D, the coefficients for obtaining E_i, were set to 0.5.

Note that although cameras and edge servers are actually arranged in a two-dimensional space (horizontal and vertical), this simulator uses a one-dimensional space (horizontal) for simple modeling. Data are actually generated in the two-dimensional space; that is, as time elapses from the last finding, the number of edge servers to be searched increases two-dimensionally. To reflect this, the compensated response time T'_ci is calculated as follows.

$$T'_{c_i} = T_{c_i}\, l_i \qquad (7)$$

where l_i is a weight reflecting the expansion of the search range in two-dimensional space and is defined as

$$l_i = 2\,(T_i - T_L) - 1 \qquad (8)$$

The farther the data are in time from the last finding, the larger l_i becomes.
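A small Python sketch of Equations (7) and (8) as reconstructed above (ours; function and argument names are assumptions): the weight l_i grows with the elapsed time since the last finding, stretching the base response time to reflect the two-dimensional expansion of the search range.

# Sketch of Equations (7) and (8): compensate the 1-D simulation for the 2-D search range.
def range_weight(t_i: int, t_last: int) -> int:
    return 2 * (t_i - t_last) - 1                     # Eq. (8)

def compensated_response_time(t_c: float, t_i: int, t_last: int) -> float:
    return t_c * range_weight(t_i, t_last)            # Eq. (7)

# Example: three steps after the last finding, a 10 s base response time becomes 50 s.
print(compensated_response_time(t_c=10.0, t_i=3, t_last=0))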

C. Experimental conditions and results

To evaluate the effect of the proposed method on reducing search response time, we measured the total processing time when using the proposed method. As the baseline, we also measured the total response time when randomly selecting data only from the data at T_5. The random selection method did not consider past data because it does not use the existence probability of context. Moreover, for comparison, we measured the total response time when selecting data with consideration of load balance. This method selected data only from the data at T_5 stored by the edge server with the fewest executing requests. If there were multiple edge servers with the fewest executing requests, the data and edge server were selected randomly from among them. To minimize computing cost, none of the methods selected multiple data in parallel for one request; they selected one unit of data at a time and repeatedly selected other data one by one until they found the required data.
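For reference, the two baselines can be expressed as simple selection policies in Python (our sketch; the candidate representation and names are assumptions): candidates are restricted to the data generated at the required time T_5.

import random

# Sketch of the two baseline policies compared in the experiment (names are ours).
# Candidates are (server_id, generated_time) pairs restricted to the required time T_5.
def random_selection(candidates):
    return random.choice(candidates)

def simple_load_balancing(candidates, load):
    # Pick among the servers with the fewest running requests, at random on a tie.
    min_load = min(load[server] for server, _ in candidates)
    least_loaded = [c for c in candidates if load[c[0]] == min_load]
    return random.choice(least_loaded)

candidates = [(s, 5) for s in range(10)]
load = {s: random.randint(0, 3) for s in range(10)}
print(random_selection(candidates), simple_load_balancing(candidates, load))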

In the experiment, the number of edge servers was set to 10, 50, and 100; the number of edge servers corresponds to the size of the total service area. Figure 3 shows the total response time of the 1000 requests. In the experiment, the coefficient of data value ω was set to 0.001. Under all experimental conditions, the proposed method reduced the response time compared with the other methods. The proposed method reduced the response time most under the condition of normal distribution movement and 100 edge servers, where the reduction rate was 95% compared with random selection. In contrast, simple load balancing reduced the response time by only 6% under the same conditions. Looking at the reduction effect for each number of edge servers, the greater the number of edge servers, the greater the reduction effect. We analyzed the reason for this. In all methods, as the number of edge servers increased, the number of concurrently executed processes on each edge server decreased. However, for random selection and simple load balancing, an increase in the number of edge servers reduces the probability of selecting the required data and increases the number of repeated selections, so these two effects canceled each other out. In contrast, with the proposed method, the number of data selections remained small thanks to the existence probability even when the number of edge servers was large. Therefore, the larger the number of edge servers, the larger the difference in response time between the proposed method and the other methods.

Next, looking at the difference between the movement models, the reduction rate is higher for the normal distribution movement at any number of edge servers. This is because the closer the existence probability model is to the simulated movement model, the more data containing the target's context can be selected. However, in the cases of 50 and 100 edge servers, the difference in reduction rate was only 1%, which is slight.

Further, we evaluated the relationship between the response time and ω. Figure 4 shows the response time for each value of ω, normalized by the shortest processing time, obtained when ω was 0.001, under each experimental condition. As ω becomes larger, data at the latest time are preferentially selected. Depending on the value of ω, the response time varies by a factor of 1.6 in the normal distribution movement case but by a factor of 7.7 in the straight movement case. Moreover, in the straight movement case, the response time increases as ω increases up to 0.3 and decreases as ω increases further. We explain the reason for this result. In the straight movement, the target moved to a distant place as time elapsed, which is contrary to the estimation of the proposed method using the normal distribution. Therefore, the correct data were unlikely to be selected by the proposed method. As a result, fewer incorrect data were selected when the search was performed in order from the oldest data. In particular, when ω was 0.3, past data without the target's context were selected the most. When ω is larger than 0.3, the total response time decreases because the amount of past data selected without the target's context decreases. This result means that the response time is more sensitive to the value of ω when the estimated model of the existence probability deviates from the actual movement.

Fig. 3. Total response time using random selection, simple load balancing, and the proposed method in the case of (a) normal distribution movement and (b) straight movement.

Fig. 4. Total response time in the proposed method with each value of ω.

Finally, note that the processing time required for selecting data with the proposed method was short: the whole calculation was completed in a few seconds on one computer, so the total response time was not affected. Overall, the experiments demonstrated the effectiveness of the proposed method for a large network.

SECTION V.

Discussion

This paper has proposed a context-based search method on edge computing and shown its effectiveness. We also comment on the number of edge servers in real networks. For example, Japan consists of 47 prefectures and 1718 municipalities. If each local government has at least one area network and corresponding edge servers, the number of computers will be more than ten times higher than in the simulation, so the response-time reduction effect will be even greater in real networks.

Furthermore, several challenges remain for increasing the accuracy and efficiency of the proposed context-based search. First, as is evident from the experiments, the accuracy of estimating the existence probability is important for further reducing the response time. Various models for predicting human movement have been proposed; for example, the hidden Markov model is useful for predicting people's future locations [19]. Adopting such models will increase the accuracy of the existence probability.

The second challenge is utilizing high layer characteristics of data other than spatiotemporal continuity. The existence probability of a context on edge servers can be derived from the number of appearances of the context during a certain period of time. Also, the existence probability obtained from multiple search results can determine the existence probability of other related contexts. For example, when a specific person is searched for and found, the result is also useful for searching for another person, because it suggests the existence probability of someone who has a relationship with him/her. Defining an ontology of context, such as its hierarchy and correlations, enables the method to derive the existence probability for various data.

Finally, another challenge is to expand the method to take heterogeneous networks into account. Real networks are heterogeneous in terms of performance, cost, and network topology, so optimal data selection considering these factors is more complicated. Context-based search will be expanded and implemented more efficiently by adopting optimization based not only on the existence probability and value of context but also on the performance and costs of edge servers and networks.

SECTION VI.

Conclusion

Context-based search is necessary to enable various services to share and utilize a large amount of sensed data generated from a large number of sensors in a wide area network. The paper proposed a novel context-based search method on edge computing using both low layer information such as computer load and high layer information such as the existence probability of context in data. In a simulation of a scenario of finding a moving person, the method reduced the response time of data search requests by 95% compared with selecting data without considering the existence probability of context. The reduction of response time lowers the search cost. In the future, we will extend the method to apply it to various types of data and heterogeneous networks.
