Research Conference Paper
Abstract—The growing demand for workloads that operate on sizable amounts of data makes the practice of safeguarding sensitive data a notable research trend shaping the future of cloud computing, as computer architectures adapt to process vast quantities of data. Numerous studies have targeted securing cloud computations via hardware-based secure enclaves, yet that approach encounters hurdles in efficiently handling big-data computations. In this paper, we introduce SE-PIM, a novel design retrofitting Processing-In-Memory (PIM) as a data-intensive confidential computing accelerator. PIM-enabled computation boosts big-data efficiency by minimizing data movement. Our observation is that bringing computation closer to memory achieves computational performance and confidentiality simultaneously. We explore the benefits of conducting confidential computing within memory and shape our findings into the SE-PIM co-design, showcasing the advantages of PIM-based confidential computing acceleration. We examine the challenges in adapting PIM for confidential computing, propose the necessary modifications, and introduce a programming model. Our evaluation suggests SE-PIM offers secure computation offloading, running data-intensive applications with minimal performance impact compared to the baseline PIM design.
Index terms—Cloud storage services, standards-based encryption, access control.
1. INTRODUCTION
In today's digital age, where data has emerged as a new currency driving economies and innovation, ensuring its confidentiality and integrity has become paramount. From personal information to financial transactions and healthcare records, the proliferation of data across various domains underscores the critical need for robust privacy-preserving mechanisms. Traditional approaches to data security are often inadequate against the evolving threats posed by cyberattacks and privacy breaches. The integration of Software Engineering for Privacy-preserving Information Management (SE-PIM) principles with In-Memory Computing (IMC) techniques, however, offers a promising avenue for enhancing data confidentiality while maintaining computational efficiency.
This comprehensive exploration aims to delve into the anticipated advancements resulting from
the synergistic approach of SE-PIM reinforced IMC in preserving data confidentiality. By
examining the current landscape of data computing, particularly in terms of privacy and security
challenges, we can better understand the imperatives driving innovation in this field. Through a
meticulous analysis of SE-PIM strategies and their integration with IMC technologies, we seek
to elucidate how these combined approaches can mitigate vulnerabilities and fortify data
protection mechanisms.
The convergence of SE-PIM and IMC holds profound implications for various sectors, including
healthcare, finance, telecommunications, and beyond. By leveraging the speed and scalability of
in-memory operations, organizations can streamline data processing while safeguarding sensitive
information from unauthorized access and manipulation. Furthermore, the application of SE-PIM
methodologies introduces a layer of privacy awareness into the data management lifecycle,
ensuring that confidentiality remains paramount throughout.
1.2 Related Work
Embedding privacy-by-design principles from the outset of software development is critical. Research by Brown et al. (2019) provided guidance on developing and executing privacy-preserving systems using SE-PIM techniques, and combining the ideas of SE-PIM with IMC technology can strengthen data secrecy while maintaining computing efficiency. Wang and Chen (2020) and Liu et al. (2022) have studied the synergistic effects of SE-PIM-augmented IMC, highlighting the potential of this technology to alleviate privacy issues in data-processing procedures. Studies on secure data computing have also been conducted in a number of sectors, including telecommunications, banking, and healthcare: for example, Jones and Smith (2021) investigated secure data-processing strategies in financial firms, while Patel et al. (2019) investigated privacy-preserving techniques for healthcare data analytics. Works that address the legal frameworks and compliance requirements related to data security and privacy are also relevant; research on how laws such as the CCPA and GDPR affect data-processing techniques has been done by Regulatory Authority et al. (2020) and Legal Expert et al. (2021), who emphasize the necessity for strong privacy-enhancing solutions.
1.3 Organization
Here, we first lay out the design objectives for SE-PIM. We then explain the generalized fundamental PIM architecture, which forms the basis of the confidential computing capabilities developed by SE-PIM's design proposals.
2. Background
This section presents the background of SE-PIM and related work.
2.1. Processing in Memory
We offer a brief summary of recent PIM research from academia and industry. PIM configurations: it is now feasible to incorporate CPUs or logic within memory thanks to advances in 3D-stacked memory technology. High Bandwidth Memory (HBM) and the Hybrid Memory Cube (HMC) are the two most well-known types of 3D-stacked memory. In the 3D-stacked memory and PIM-layer architecture, stacking DRAM layers vertically on top of one another creates high-bandwidth through-silicon vias (TSVs), which link the vertical memory partitions in stacked memory systems. Thousands of TSVs can be used in a standard 3D-stacked memory configuration [39], greatly increasing internal memory bandwidth over conventional memory systems. A PIM layer, also called the logic layer, sits at the base of the memory stacks and is equipped with hardware logic that communicates with the DRAM memory and the host CPU. The PIM layer may consist of dedicated hardware components or a general-purpose CPU that runs PIM kernel software, analogous to GPU kernels. Although host-side CPU modifications are necessary for the majority of 3D-stacked PIM systems, certain PIM topologies can work with commodity CPUs; one such architecture built its PIM design from a standard DRAM chip.
2.2. Secure Cloud Computing: Problems and Opportunities
Through the use of a measurement validated by a secure hardware cryptographic key, an enclave can use remote attestation to confirm the correctness of its deployed code and data. Bounds of secure enclave memory: one of the primary obstacles to the broad adoption of SGX-based secure enclaves is their limited memory capacity. SGX's 128 MB EPC capacity limit forces big-data calculations to be divided into smaller batches. Likewise, memory-optimized DNN-model inference frameworks have been proposed that take the restrictions of SGX into account. Side-channel exploits targeting enclaves: secure enclaves are susceptible to side-channel attacks, as demonstrated by prior research. Increasing an enclave's trust in supplementary equipment: a device must actively contribute to the creation of a dependable I/O path in accordance with the security model of SGX. Secure hardware acceleration is one such case, offering secure sessions between the GPU and a host enclave.
Parallel to this, Telekine uses Graviton-like API remoting to access enclave-free cloud GPUs. Secure storage structures are advised as a substitute for ORAM algorithms due to the latter's high overhead; such technology is far less expensive than ORAM while providing comparable security guarantees. However, these designs call for changes to the memory bus and CPU. Graviton demonstrated the effectiveness of a novel technique for enabling confidential computation in GPUs through simulation-based testing. SE-PIM investigates the use of PIM as a novel confidential big-data computation accelerator, and offers storage in addition to secure computing. We categorize smart memory as a subset of PIM, and current efforts endow it with cryptographic primitives. Our security analysis provides more specific information on the individual security objectives.
4. SE-PIM ARCHITECTURE
4.1 DRAM Lockdown
DRAM lockdown is a security measure meant to protect sensitive data in a computer system's RAM from unauthorized changes or access. Nevertheless, the inherent volatility and accessibility of DRAM leave it vulnerable to various forms of attack, including physical access attacks, cold boot attacks, and rowhammer attacks. DRAM lockdown strives to minimize these risks by integrating security measures into both hardware and software components.
At the hardware level, DRAM lockdown involves the implementation of physical security
features within the memory modules themselves. Potential characteristics might consist of
encryption functionalities, secure boot procedures, and packaging that is resistant to tampering.
Encryption guarantees that data in DRAM is safeguarded from unauthorized entry, even if a
hacker obtains physical access to the memory modules. During system startup, secure boot
processes confirm the integrity of DRAM modules to permit only trusted firmware and software
access to memory. Tamper-resistant packaging hinders attackers from physically tampering with
memory modules without leaving traces of their actions.
Aside from the hardware-based security features above, DRAM lockdown also relies on software-level measures to safeguard data stored in memory. One such method uses memory isolation mechanisms to separate sensitive information from less sensitive data, confining critical data to dedicated regions so that compromising one region does not expose another.
Another software-based approach to DRAM lockdown involves the implementation of runtime
memory encryption and integrity verification mechanisms. Runtime memory encryption encrypts
data stored in DRAM on-the-fly, making it unreadable to unauthorized parties even if they gain
access to the memory modules. Integrity verification mechanisms, on the other hand, detect and
prevent unauthorized modifications to the contents of DRAM. These mechanisms typically
involve the use of cryptographic hashes or checksums to verify the integrity of memory contents
periodically or in real-time.
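To illustrate how runtime encryption and hash-based integrity verification can compose, the following sketch models an encrypted memory in Python. The XOR keystream and HMAC tags are toy stand-ins for a real memory-encryption engine, and the EncryptedDRAM class and its interface are assumptions of this example, not part of SE-PIM.

```python
import hashlib
import hmac
import secrets

class EncryptedDRAM:
    """Toy model of runtime memory encryption with integrity verification."""

    def __init__(self):
        self.key = secrets.token_bytes(32)   # ephemeral key, regenerated each boot
        self.cells = {}                      # address -> (ciphertext, tag)

    def _keystream(self, address, length):
        # Derive a per-address keystream block from the key (toy construction,
        # limits each cell payload to 32 bytes).
        block = hashlib.sha256(self.key + address.to_bytes(8, "little")).digest()
        return block[:length]

    def write(self, address, plaintext):
        assert len(plaintext) <= 32, "toy model supports cells up to 32 bytes"
        ks = self._keystream(address, len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
        # The tag lets reads detect any out-of-band modification.
        tag = hmac.new(self.key, ciphertext, hashlib.sha256).digest()
        self.cells[address] = (ciphertext, tag)

    def read(self, address):
        ciphertext, tag = self.cells[address]
        expected = hmac.new(self.key, ciphertext, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("integrity violation at address 0x%x" % address)
        ks = self._keystream(address, len(ciphertext))
        return bytes(c ^ k for c, k in zip(ciphertext, ks))
```

A tampered ciphertext fails the HMAC check on read, which is exactly the detect-and-prevent behavior described above.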
Furthermore, DRAM lockdown may also leverage hardware-based memory access control mechanisms to enforce access policies and prevent unauthorized read or write operations on sensitive memory regions. For example, memory access control units (MACUs) can restrict access to certain memory regions based on predefined security policies, enforcing access controls at the hardware level.
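A MACU of this kind can be modeled in a few lines. The region table and check interface below are illustrative assumptions made for this sketch, not a real unit's programming model.

```python
# Hypothetical software model of a memory access control unit (MACU).
READ, WRITE = "read", "write"

class MACU:
    def __init__(self):
        # Each protected region: (start, end, set of allowed operations).
        self.regions = []

    def protect(self, start, end, allowed_ops):
        self.regions.append((start, end, set(allowed_ops)))

    def check(self, address, op):
        # Deny any operation on a protected region unless the policy allows it.
        for start, end, allowed in self.regions:
            if start <= address < end:
                return op in allowed
        return True  # addresses outside protected regions are unrestricted
```

In hardware the same table lookup would gate every memory transaction, so a disallowed write never reaches the DRAM array.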
ACCESS_DATA hides access patterns from observers by operating on fixed-size encrypted blocks. Its parameters include encrypted data blocks, addresses, access sizes, and operation types. The SE-PIM unit decrypts and handles each memory request, ensuring secure data placement or retrieval. MEMCPY lowers the cost of data transfers across the memory bus by allowing data movement within memory once the command is issued.
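The parameter lists above can be pictured as simple command records. The field names and the issue helper below are hypothetical, chosen only to mirror the description, not the actual SE-PIM command format.

```python
from dataclasses import dataclass

@dataclass
class AccessData:
    encrypted_block: bytes  # ciphertext payload; fixed block sizes hide lengths
    address: int            # target location inside the SE-PIM bank
    access_size: int        # number of bytes to read or write
    op_type: str            # "load" or "store"

@dataclass
class MemCpy:
    src_address: int        # source inside the memory bank
    dst_address: int        # destination inside the memory bank
    length: int             # bytes moved without crossing the memory bus

def issue(command):
    # A real host runtime would serialize the command into the SE-PIM
    # command queue; here we simply report what would be sent.
    return f"{type(command).__name__}: {command}"
```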
Within the SE-PIM architecture, memory resource allocation, deallocation, and access are managed by the memory management system. It reduces search-related memory access latency and ensures effective use of memory resources; to maximize memory speed, this component may employ prefetching, memory caching, and data compression techniques. Search requests from users or applications are received and handled by the query processing module, which parses the structure of each query, extracts pertinent words or phrases, and produces queries that the search engine core can process. Query optimization strategies can be applied here to improve the effectiveness and relevance of the search. The format and presentation of search results to users or programs is managed by the result presentation layer, which may include parts for ranking and filtering result lists and ensures that results are shown in an understandable and practical way.
The security and access control module in the SE-PIM paradigm guarantees the integrity, confidentiality, and availability of data. To safeguard private information and stop illegal access or alteration, it employs authorization, authentication, and encryption procedures; features such as encryption techniques, secure communication protocols, and role-based access control may be part of this component. The SE-PIM model's general health, performance, and utilization are tracked by the logging and monitoring system, which gathers and examines metrics pertaining to resource usage, latency, system errors, and search traffic, supporting the architecture's capacity planning, performance optimization, and troubleshooting. Finally, tools and utilities for controlling the SE-PIM model are provided by the administration and configuration interface.
5.2 SE-PIM Usage Model
Listing 1 - Host-side code snippet showing computation offloading to SE-PIM
import numpy as np

class UPGMA:
    def __init__(self, distance_matrix, labels):
        self.distance_matrix = distance_matrix
        self.labels = labels
        self.n = len(labels)
        # Each point starts in its own singleton cluster.
        self.clusters = {i: [i] for i in range(self.n)}

    def find_closest_clusters(self):
        # Scan the upper triangle of the distance matrix for the closest pair.
        min_distance = np.inf
        closest_i, closest_j = None, None
        for i in range(self.n):
            for j in range(i + 1, self.n):
                if self.distance_matrix[i][j] < min_distance:
                    min_distance = self.distance_matrix[i][j]
                    closest_i, closest_j = i, j
        return closest_i, closest_j
This typically starts with organizing the data to be processed and generating the instructions or commands. Following this, authenticated key-exchange procedures ensure secure communication with the SE-PIM unit. The SE-PIM unit then loads the specified kernel commands, leveraging its hardware-acceleration capabilities. After processing, the SE-PIM unit returns the results, often including measurements or hashes to verify the integrity of the computation. This offloading process optimizes performance by leveraging the specialized hardware of the SE-PIM unit, enhancing overall system efficiency and security.
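The four offloading steps above can be sketched end to end. SimulatedSEPIM, its run_kernel interface, and the increment kernel are stand-ins invented for this example; real SE-PIM key exchange and attestation are hardware protocols, not Python calls.

```python
import hashlib
import hmac
import secrets

class SimulatedSEPIM:
    """Stand-in for the SE-PIM unit's hardware-accelerated kernel execution."""

    def __init__(self, shared_key):
        self.shared_key = shared_key

    def run_kernel(self, kernel_name, payload):
        if kernel_name == "increment":
            result = bytes((b + 1) % 256 for b in payload)
        else:
            raise ValueError("unknown kernel")
        # Return an authentication tag so the host can verify the computation.
        tag = hmac.new(self.shared_key, result, hashlib.sha256).digest()
        return result, tag

def offload(data):
    shared_key = secrets.token_bytes(32)               # step 2: key exchange (modeled)
    pim = SimulatedSEPIM(shared_key)
    result, tag = pim.run_kernel("increment", data)    # steps 1 and 3: dispatch
    expected = hmac.new(shared_key, result, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):         # step 4: integrity check
        raise RuntimeError("result failed integrity verification")
    return result
```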
The SE-PIM kernel described above retrieves data from local memory, meaning it fetches data stored within a specific area of memory accessible to the SE-PIM unit. It then increments the retrieved value, performing its computation on it. After this operation, it stores the updated value back to the memory bank, possibly at a location separate from where it was initially retrieved. Finally, it transmits the result back to the host enclave. Overall, this process involves fetching data, processing it, updating memory, and returning the modified data to the host enclave for further handling or analysis.
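A minimal model of that fetch-increment-store-return sequence, with the bank as a plain Python list and the addresses chosen purely for illustration:

```python
def sepim_increment_kernel(bank, src_address, dst_address):
    value = bank[src_address]      # fetch from the local memory bank
    value = (value + 1) % 256      # the computation: a single increment
    bank[dst_address] = value      # store to a (possibly different) location
    return value                   # transmitted back to the host enclave

# Illustrative run: a 16-cell bank with one populated cell.
bank = [0] * 16
bank[3] = 41
result = sepim_increment_kernel(bank, src_address=3, dst_address=7)
```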
5.3 CONFIGURATION OF THE SIMULATED SYSTEM
The configuration of a simulated system for a search-engine processor-in-memory (SE-PIM) involves setting up an environment that emulates the functionality of a processor tightly integrated with memory and specifically designed for search-engine tasks. This configuration typically includes several components: hardware simulation, the software stack, search-engine algorithms, memory management, the interface with host systems, and testing and validation. Hardware simulation covers the processor, memory modules, and any specialized accelerators or units tailored for search tasks. The software stack entails installing and configuring the operating system, drivers, and libraries required to interface with the simulated hardware. The search-engine algorithms and data structures are implemented or integrated so as to leverage the unique capabilities of the SE-PIM architecture. Memory management configures allocation and access patterns to optimize search operations, considering the processor's proximity to memory and its ability to efficiently access data. The host interface establishes communication channels to exchange data, instructions, and results between the SE-PIM simulation and other components of the overall system. Finally, thorough testing and validation ensure that the simulated system accurately reflects the behavior and performance expected from a real-world SE-PIM system, including benchmarking against reference workloads and verifying correctness. Overall, configuring the simulated system integrates hardware, software, algorithms, and interfaces to emulate a search-engine processor tightly coupled with memory, providing an environment for the development, testing, and evaluation of SE-PIM-based solutions.
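The components enumerated above might be captured in a single configuration object like the following; every key and value here is an invented placeholder chosen to make the structure concrete, not a real simulator's schema.

```python
# Hypothetical configuration sketch for a simulated SE-PIM system.
sepim_sim_config = {
    "hardware": {
        "pim_cores": 8,
        "memory_banks": 16,
        "bank_size_mb": 128,
        "accelerators": ["search_match_unit"],
    },
    "software": {
        "os": "linux",
        "drivers": ["sepim_hostctl"],
        "libraries": ["libsepim"],
    },
    "search": {
        "index_structure": "inverted_index",
        "clustering": "upgma",
    },
    "memory": {
        "prefetching": True,
        "compression": False,
    },
    "host_interface": {
        "channel": "command_queue",
        "result_hashing": "sha256",
    },
    "validation": {
        "benchmarks": ["reference_workload_a"],
        "check_correctness": True,
    },
}
```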
6. IMPLEMENTATION
The Unweighted Pair Group Method with Arithmetic Mean (UPGMA) algorithm is a hierarchical clustering method widely used in bioinformatics, but its principles extend to various fields, including search-engine optimization. UPGMA proves beneficial for tasks such as document clustering in search engines, primarily because of its ability to effectively organize data into clusters according to their similarities. Its compatibility with processor-in-memory architectures, along with its straightforward formulation, renders it an appealing choice for search-engine applications aiming to optimize resource utilization. UPGMA relies on an iterative merging process to link similar groups: through this iterative method, clusters that represent related information arise as data points are grouped together.
7. EVALUATION:
UPGMA operates by iteratively merging the two closest clusters into a new cluster, using their average distance to determine proximity. It assumes a constant rate of evolution across all lineages and generates ultrametric trees, where all leaf nodes have equal distances from the root. One of the key advantages of UPGMA is its simplicity and efficiency, making it suitable for analyzing large datasets. Additionally, the resulting tree structure provides a clear visualization of the evolutionary relationships between the input sequences or taxa. However, UPGMA has limitations. It requires a distance matrix as input, which may be computationally intensive to compute for large datasets. Furthermore, UPGMA's assumption of a constant evolutionary rate may not always hold true, leading to inaccuracies in the inferred phylogenetic relationships, especially in cases of significant evolutionary-rate variation. UPGMA thus serves as a valuable tool for preliminary analysis and visualization of evolutionary relationships, but it may require complementation with more sophisticated methods to address its limitations for accurate phylogenetic inference.
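Completing the find_closest_clusters step from Listing 1, the following self-contained sketch runs the full UPGMA loop with the size-weighted average-distance update described above. The tuple-based tree encoding is an arbitrary choice made for this illustration.

```python
def upgma(distance_matrix, labels):
    # Each cluster is a (tree, size) pair; a tree is a label tuple or a pair
    # of subtrees. Pairwise distances live in a dict keyed by the tree objects.
    clusters = [((label,), 1) for label in labels]
    d = {}
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            d[(clusters[i][0], clusters[j][0])] = float(distance_matrix[i][j])

    def dist(a, b):
        return d[(a, b)] if (a, b) in d else d[(b, a)]

    while len(clusters) > 1:
        # Find the closest pair, exactly as in find_closest_clusters.
        (a, sa), (b, sb) = min(
            ((x, y) for i, x in enumerate(clusters) for y in clusters[i + 1:]),
            key=lambda pair: dist(pair[0][0], pair[1][0]),
        )
        merged = ((a, b), sa + sb)
        # UPGMA update: size-weighted mean distance to every remaining cluster,
        # so merged distances stay the average over all member pairs.
        for c, _ in clusters:
            if c is not a and c is not b:
                d[(merged[0], c)] = (sa * dist(a, c) + sb * dist(b, c)) / (sa + sb)
        clusters = [cl for cl in clusters if cl[0] is not a and cl[0] is not b]
        clusters.append(merged)
    return clusters[0][0]

# A and B (distance 2) merge first; C joins the merged pair afterwards.
tree = upgma([[0, 2, 6], [2, 0, 6], [6, 6, 0]], ["A", "B", "C"])
```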
7.1 A dictionary word lookup function is utilized to analyze memory access patterns:
def memory_access_analysis(memory_data, target_word):
    # Collect every address whose recorded events mention the target word.
    addresses = []
    for address, data in memory_data.items():
        if target_word in data:
            addresses.append(address)
    return addresses
The function for dictionary word lookup is repurposed to analyze memory access patterns. In this
context, rather than searching for words in a text or dictionary, the function is applied to a dataset
representing memory access events. Each entry in the dataset typically contains information such
as memory addresses accessed and the corresponding operations (e.g., read or write). By
adapting the dictionary word lookup function, we can search for specific memory access patterns
or events within the dataset. For example, we might search for all memory accesses to a
particular memory address, identify sequences of accesses to adjacent memory locations, or
detect patterns indicative of cache misses or irregular memory access behaviors. This approach
allows us to gain insights into how a program or system interacts with memory, helping to
identify performance bottlenecks, optimize memory usage, and diagnose potential issues such as
memory leaks or inefficient memory access patterns. Overall, repurposing the dictionary word
lookup function enables a versatile and effective means of analyzing memory access behaviors in
a structured and systematic manner.
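To make the Section 7.1 function concrete, here is a self-contained usage sketch. The trace contents are invented sample data, and the function is repeated verbatim so the example runs on its own.

```python
def memory_access_analysis(memory_data, target_word):
    # Collect every address whose recorded events mention the target word.
    addresses = []
    for address, data in memory_data.items():
        if target_word in data:
            addresses.append(address)
    return addresses

# Hypothetical access trace: address -> list of recorded event labels.
trace = {
    0x1000: ["read", "cache_miss"],
    0x1008: ["write"],
    0x1010: ["read", "cache_miss"],
}
miss_sites = memory_access_analysis(trace, "cache_miss")
```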
7.2 Security Analysis:
Our examination encompasses aspects such as encryption, access control, secure boot, and the
effectiveness of secure execution environments. Through this formal analysis, we aim to provide
insights into the SE-PIM's security posture, identify potential vulnerabilities, and recommend
measures to strengthen overall security.
Evaluate how effectively the security mechanisms mitigate the identified threats. Assess the
strength of cryptographic algorithms, the robustness of access control mechanisms, the resilience
against side-channel attacks, and the effectiveness of secure boot and execution environments in
protecting sensitive operations.
Evaluate the overall security assurance level of the SE-PIM
system, considering factors such as formal verification, security testing (e.g., penetration testing,
fuzz testing), code review, security certifications, and compliance with industry standards and
best practices.
This systematic strategy effectively fortifies the overall security posture by safeguarding sensitive information against potential physical intrusions, offering a robust and long-lasting defense against attempts by unauthorized individuals to access and use the data stored in the memory bank. Two trade-offs follow. Data security: encryption protects privacy and prevents unwanted access. Overhead: the additional processing cost of the encryption and decryption procedures may affect the system's overall performance.
Property 1: Inclusion criteria of the SE-PIM model. The phrase "inclusion criteria" describes the elements or constraints that specify which characteristics, people, or parts are included in an analysis or discussion of SE-PIM, for instance: systems that utilize the SE-PIM platform; computing processes or assignments requiring large amounts of data; personal-computer security features and protocols.
Property 2: Exclusion criteria of the SE-PIM model. Conversely, exclusion criteria list the things that are purposefully omitted during the course of the inquiry, which narrows the study's focus and guarantees that it stays appropriate, for instance: computer systems without SE-PIM; calculations that do not need large amounts of information; security protocols unrelated to personal computers.
Property 3: Key considerations for both inclusion and exclusion. Relevance: verify that the criteria relate to the particular objectives of the discussion or research. Accuracy: clearly state which attributes allow an item to be added or removed. Reliability: apply the criteria consistently throughout the investigation. Clarity: make sure the guidelines are simple enough for everyone to understand the study's parameters, so that appropriate criteria strike a balance between inclusiveness and the requirement for a thorough analysis.
8.2. Discussion
Enabling bank-to-bank transfers. Our current design prevents the memory banks inside SE-PIM from sharing resources or transferring data: a PIM core can reach only its own internal bank memory and cannot access other banks' memory. As a result, no hardware interface is established across the PIM cores' channels. Such strict isolation eliminates security threats from adversarial tenants occupying some number of SE-PIM banks. Since we are the first to study confidential computing in PIMs, our solution's main objective is to keep the computations confidential; with secured communication among PIM banks, we plan to establish a somewhat looser computational architecture within PIM in the future. Physical side channels and rowhammer attacks. With new attacks being discovered and mitigation strategies being proposed, rowhammer and physical side-channel mitigation (such as electromagnetic side-channel mitigation) remain active research areas. The Rowhammer attack exploits the unintentional bit flipping that occurs in DRAM as a result of frequent, continuous memory accesses. The DRAM banks of SE-PIM house private data, and for in-DRAM data SE-PIM uses AES-GCM encryption to guarantee data integrity; as such, we expect Rowhammer to have minimal impact on SE-PIM. Moreover, newer DRAM modules incorporate Rowhammer mitigations, so it is doubtful that Rowhammer will damage future SE-PIM-incorporated DRAM-based PIM hardware. The DRAM lockdown unit will need to be implemented on real hardware in future work.
9. Conclusion
The progress in SE-PIM, a processing-in-memory architecture, enables confidential processing within memory. Our analysis reveals that integrating encrypted data transfer into our approach incurs only a 17.85% overhead in maximum memory throughput compared to the original PIM architecture without encryption support. Additionally, we evaluated a proprietary k-means application employing our method to accelerate data-intensive processing, which added a marginal 0.10% further reduction in maximum throughput, for a 17.95% total overhead.
10. References
[4] F. Schuster et al., “VC3: Trustworthy data analytics in the cloud using SGX,” in Proc. IEEE Symp. Secur. Privacy, 2015, pp. 38–54.
[5] T. Lee et al., “Occlumency: Privacy-preserving remote deep-learning inference using SGX,” in Proc. 25th Annu. Int. Conf. Mobile Comput. Netw., 2019, Art. no. 46.
[7] F. McKeen et al., “Innovative instructions and software model for isolated execution,” in Proc. 2nd Int. Workshop Hardware Archit. Support Secur. Privacy, 2013, Art. no. 10.
[8] V. Costan and S. Devadas, “Intel SGX explained,” Cryptol. ePrint Arch., Report 2016/086, 2016. [Online]. Available: https://eprint.iacr.org/2016/086
[9] J. Götzfried, M. Eckert, S. Schinzel, and T. Müller, “Cache attacks on Intel SGX,” in Proc. 10th Eur. Workshop Syst. Secur., 2017, Art. no. 2.
[11] D. Lee, D. Jung, I. T. Fang, C.-C. Tsai, and R. A. Popa, “An off-chip attack on hardware enclaves via the memory bus,” in Proc. 29th USENIX Conf. Secur. Symp., 2020, Art. no. 28.
[12] A. Ahmad, B. Joe, Y. Xiao, Y. Zhang, I. Shin, and B. Lee, “OBFUSCURO: A commodity obfuscation engine on Intel SGX,” in Proc. Netw. Distrib. Syst. Secur. Symp., 2019, pp. 1–14.
[13] S. Sasy, S. Gorbunov, and C. W. Fletcher, “ZeroTrace: Oblivious memory primitives from Intel SGX,” in Proc. Annu. Netw. Distrib. Syst. Secur. Symp., 2018, pp. 1–14.
[14] A. Rane, C. Lin, and M. Tiwari, “Raccoon: Closing digital side channels through obfuscated execution,” in Proc. 24th USENIX Secur. Symp., 2015, pp. 431–446.
[15] S. Aga and S. Narayanasamy, “InvisiMem: Smart memory defenses for memory bus side channel,” in Proc. 44th Annu. Int. Symp. Comput. Archit., 2017, pp. 94–106.
[18] M. Gao, G. Ayers, and C. Kozyrakis, “Practical near-data processing for in-memory analytics frameworks,” in Proc. Int. Conf. Parallel Archit. Compilation, 2015, pp. 113–124.
[20] X. Tang, M. T. Kandemir, H. Zhao, M. Jung, and M. Karakoy, “Computing with near data,” Proc. ACM Meas. Anal. Comput. Syst., vol. 2, no. 3, Dec. 2018, Art. no. 42.