PhD Position F/M: Cost- and Performance-Efficient Caching for Massively Distributed Systems | INRIA
Context and working environment
Financial and working environment.
This PhD will be in the context of IPCEI-CIS (Important Project of Common European Interest – Next Generation Cloud Infrastructure and Services) DXP (Data Exchange Platform) project involving Amadeus and three Inria research teams (COAST, CEDAR and MAGELLAN).
This project aims to design and develop an open-source management solution for a federated and distributed data exchange platform (DXP), operating in an open, scalable, and massively distributed environment (cloud-edge continuum).
The PhD student will be recruited and hosted at the Inria Center at Rennes University, and the work will be carried out within the MAGELLAN team in collaboration with the other partners.
The PhD student will be supervised by:
Assigned mission
Context
The ever-growing number of services and Internet of Things (IoT) devices has resulted in data being distributed across different locations (regions and countries).
Additionally, data exhibits different usage patterns, including cold data (written once and rarely, if ever, read), stream data (produced once and consumed by many), and hot data (frequently read and updated).
Furthermore, these data types have different performance and dependability requirements (e.g., low latency for data streams).
Data caching is a widely used technique that improves application performance by storing data on high-speed devices close to end users.
Most research on data caching has focused on the benefits of different data placement strategies (i.e., which data to place in the cache), data movement, cache partitioning, cache eviction [1, 2, 3, 4, 5, 6, 7, 8], and on realizing cost-efficient data redundancy techniques in caching systems [9].
However, few efforts have studied data management when caches are distributed across different platforms (Edge-to-Cloud), utilize heterogeneous storage devices (in terms of performance and cost), and serve multiple, diverse applications, including traditional data services, serverless workflows and data streaming.
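As background on the eviction policies analyzed in [1, 2], the difference between LRU and FIFO can be sketched in a few lines. The following minimal Python sketch is purely illustrative and is not part of the project codebase:

```python
from collections import OrderedDict, deque


class LRUCache:
    """Least-Recently-Used: evicts the entry untouched the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used


class FIFOCache:
    """First-In-First-Out: evicts the oldest inserted entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.order = deque()

    def get(self, key):
        return self.store.get(key)  # accesses do not reorder entries

    def put(self, key, value):
        if key not in self.store:
            self.order.append(key)
            if len(self.order) > self.capacity:
                self.store.pop(self.order.popleft())  # evict oldest insert
        self.store[key] = value
```

Under a workload that re-reads a popular key, LRU keeps that key resident while FIFO may evict it anyway; quantifying this gap under realistic access patterns is exactly the kind of analysis carried out in [1, 2].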
References:
[1] Asit Dan and Don Towsley. 1990. An Approximate Analysis of the LRU and FIFO Buffer Replacement Schemes. SIGMETRICS Perform. Eval. Rev. 18, 1 (Apr 1990), 143–152.
[2] Marek Chrobak and John Noga. 1999. LRU is better than FIFO. Algorithmica 23 (1999), 180–185.
[3] Aaron Blankstein, Siddhartha Sen, and Michael J. Freedman. 2017. Hyperbolic Caching: Flexible Caching for Web Applications. In 2017 USENIX Annual Technical Conference (USENIX ATC 17).
[4] Cristian Ungureanu, Biplob Debnath, Stephen Rago, and Akshat Aranya. 2013. TBF: A Memory-Efficient Replacement Policy for Flash-Based Caches. In 2013 IEEE 29th International Conference on Data Engineering (ICDE), 1117–1128.
[5] Orcun Yildiz, Amelie Chi Zhou, and Shadi Ibrahim. 2018. Improving the Effectiveness of Burst Buffers for Big Data Processing in HPC Systems with Eley. Future Generation Computer Systems 86 (2018), 308–318.
[6] G. Aupy, O. Beaumont, and L. Eyraud-Dubois. 2019. Sizing and Partitioning Strategies for Burst-Buffers to Reduce IO Contention. In 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Rio de Janeiro, Brazil.
[7] Yazhuo Zhang, Juncheng Yang, Yao Yue, et al. 2024. SIEVE is Simpler than LRU: An Efficient Turn-Key Eviction Algorithm for Web Caches. In 21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), 1229–1246.
[8] Juncheng Yang, Ziming Mao, Yao Yue, and K. V. Rashmi. 2023. GL-Cache: Group-Level Learning for Efficient and High-Performance Caching. In 21st USENIX Conference on File and Storage Technologies (FAST 23), 115–134.
[9] K. V. Rashmi, Mosharaf Chowdhury, Jack Kosaian, et al. 2016. EC-Cache: Load-Balanced, Low-Latency Cluster Caching with Online Erasure Coding. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 401–417.
Main activities
The goal is to design a cost- and performance-efficient distributed smart caching middleware that facilitates data exchange within and across data providers (producers) and consumers (users), while accounting for data temperature, access frequency, and the heterogeneity and dynamics of the infrastructure.
Specifically, we aim to address these research questions:
Research Methodology: This research addresses the challenges of data management in distributed, heterogeneous caches by designing and implementing novel middleware, models, algorithms, and a framework to answer the above questions.
All solutions will be validated through simulations or on a real distributed infrastructure, such as Grid’5000 and Amazon Web Services.
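To make the cost/performance trade-off concrete, a cache tier spanning the edge-to-cloud continuum can be chosen greedily: pick the cheapest tier that still meets an item's latency requirement. The sketch below is a toy illustration only; the tier names, latency, cost, and capacity figures are all hypothetical assumptions, not project data or results:

```python
# Hypothetical tiers of a heterogeneous edge-to-cloud cache hierarchy.
# All names and numbers are illustrative assumptions.
TIERS = [
    {"name": "edge-nvme",  "latency_ms": 1.0,  "cost_per_gb": 0.20, "free_gb": 50},
    {"name": "region-ssd", "latency_ms": 10.0, "cost_per_gb": 0.05, "free_gb": 500},
    {"name": "cloud-hdd",  "latency_ms": 50.0, "cost_per_gb": 0.01, "free_gb": 5000},
]


def place(item_gb, max_latency_ms):
    """Greedy cost-aware placement: cheapest tier that still meets the
    item's latency requirement and has enough free capacity."""
    candidates = [t for t in TIERS
                  if t["latency_ms"] <= max_latency_ms and t["free_gb"] >= item_gb]
    if not candidates:
        return None  # no tier satisfies the requirement
    best = min(candidates, key=lambda t: t["cost_per_gb"])
    best["free_gb"] -= item_gb  # reserve capacity on the chosen tier
    return best["name"]
```

In this toy model, a latency-sensitive stream segment (say, a 5 ms budget) can only land on edge NVMe, while cold data tolerating 100 ms falls through to the cheapest tier; the actual middleware would of course also have to handle dynamics such as tier failures, shifting access frequencies, and data movement between tiers.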
Skills
Benefits
Remuneration
Monthly gross salary: €2,200.