GSOC2021DCE

= Pre-Coding Phase =

* I started by testing the scheduler fix for LKL that I had thought of earlier.

* Idea: The accept network system call works on the following principle. For AF_INET sockets it executes inet_accept(), which sets up a few locks and other state, and control ultimately reaches inet_wait_for_connect(). This function puts the current thread into the TASK_INTERRUPTIBLE state and issues schedule_timeout(), which sets up a mod_timer to wake the calling thread after a given number of jiffies have passed; meanwhile it schedules other tasks in the run queue, so that while the thread which issued the accept call waits for it to return a file descriptor for the newly opened socket, other tasks can run. This is straightforward on any real operating system: given two applications, one server and one client, while the server waits on accept, the Linux scheduler schedules the client to issue a connect() call and then switches back to the server, where accept() returns a file descriptor on which send and recv operations can be performed (in the same way as above). But this isn't so straightforward with DCE.

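A minimal user-space sketch of this principle (my own illustration, not DCE or kernel code; the loopback port is arbitrary): while the parent blocks in accept(), the host scheduler runs the forked child, whose connect() then unblocks the parent.

<code>
/* Parent blocks in accept(); the scheduler runs the child meanwhile. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(5555);               /* arbitrary test port */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    if (fork() == 0) {                         /* child: the "client" */
        int cli = socket(AF_INET, SOCK_STREAM, 0);
        connect(cli, (struct sockaddr *)&addr, sizeof(addr));
        close(cli);
        exit(0);
    }

    /* parent: the "server"; accept() parks this task until the
     * child's connect() arrives, the behaviour described above */
    int conn = accept(srv, NULL, NULL);
    printf("accept() returned fd %d\n", conn);
    close(conn);
    close(srv);
    return 0;
}
</code>
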
* First and foremost, DCE doesn't run on actual threads; it uses fibres (LWPs), creating contexts and switching back and forth between them through the TaskManager and fibre-switch handlers. A few important functions are TaskManager::(TaskStart/TaskWakeup/TaskWait/Schedule/TaskSwitch...). Since we load LKL as a shared library, the kernel scheduler inside the LKL space has no access to the native threads we work with in DCE, so when any system call invokes the scheduler, LKL keeps scheduling the tasks created inside the LKL environment (ksoftirqd, tasklets, workqueues, kernel threads, IRQ handlers) and never reaches the DCE threads, leaving DCE execution at a standstill on blocking operations such as accept/connect/send/recv. Apart from ordinary syscalls, socket functions like send register a namespace skb destructor which gets pushed to a workqueue for later execution when sk_free is called, and workqueues ultimately depend on schedule() to run the rescuer kernel threads created to drain the tasks pushed to them.

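To illustrate the fibre model itself, here is a standalone sketch using POSIX ucontext (not DCE's actual TaskManager code): a fibre runs only when something explicitly switches to it, so a scheduler that never performs that switch starves it, which is exactly what happens to the DCE fibres here.

<code>
/* Two contexts share one host thread; progress is purely cooperative. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;

static void dce_task(void)
{
    printf("DCE fibre: runs only when explicitly scheduled\n");
    swapcontext(&task_ctx, &main_ctx);      /* cooperative yield back */
}

int main(void)
{
    static char stack[64 * 1024];
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = stack;
    task_ctx.uc_stack.ss_size = sizeof(stack);
    task_ctx.uc_link = &main_ctx;
    makecontext(&task_ctx, dce_task, 0);

    /* If this swap never happens, as when LKL's scheduler keeps
     * picking its own internal tasks, dce_task() never runs at all. */
    swapcontext(&main_ctx, &task_ctx);
    printf("scheduler: back in control\n");
    return 0;
}
</code>
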
* Solution 1: I thought that if the problem is never reaching the DCE threads/fibres, then let's override the LKL scheduler to schedule the DCE threads first and only then the LKL tasks; since LKL is a uniprocessor system (there lies another problem, which I'll get to later in this report), that shouldn't make much of a difference (at least, that's how it seemed to me). So I overrode the schedule_timeout and schedule functions to put the current DCE thread to sleep for the given timeout and start the DCE scheduler with TaskManager::Schedule (a private function, so I had to create some public wrappers to call it), and then ran the scheduler inside LKL.

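A rough sketch of that override (DceTaskSleep, DceRunScheduler and lkl_run_queue are hypothetical stand-ins for the public TaskManager wrappers and LKL run-queue logic mentioned above, so this compiles and runs with stubs):

<code>
#include <stdio.h>

/* Stubs standing in for DCE/LKL internals (assumptions, not real APIs). */
static void DceTaskSleep(long timeout) { printf("DCE fibre sleeps for %ld\n", timeout); }
static void DceRunScheduler(void)      { printf("DCE scheduler runs other fibres\n"); }
static void lkl_run_queue(void)        { printf("LKL schedules its own tasks\n"); }

/* Overridden schedule_timeout(): yield to DCE before LKL's own tasks. */
static long schedule_timeout(long timeout)
{
    DceTaskSleep(timeout);     /* park the current DCE thread           */
    DceRunScheduler();         /* TaskManager::Schedule via the wrapper */
    lkl_run_queue();           /* only then run LKL's internal tasks    */
    return 0;
}

int main(void)
{
    schedule_timeout(100);
    return 0;
}
</code>
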
* Report 1 (did not pass): I had missed the fact that some of the init calls under start_kernel, defined in init/main.c, themselves require the scheduler. The problem is that we first need to call lkl_start_kernel(...), which runs those init functions; since we had already prioritized the DCE threads over the kernel threads, the kernel never initialized and execution simply jumped to the other DCE thread, which began by creating a socket even though the network interface hadn't been initialized. The odd part is that the program didn't crash; it just got stuck again.

* Solution 2: I concluded the problem lay in overriding the universal LKL scheduler, so I decided to leave it alone and make the patch more specific by issuing the scheduler calls from within inet_wait_for_connect instead.

* Report 2: The init calls now worked as they used to, and the system reached the point where, after the accept call, it switched over to the other DCE task. But the socket(...) call got stuck in the same way as in Report 1.

* Why did this happen?

* It turns out it's not our fault; it's how LKL was designed to work. Remember, I mentioned that LKL is a uniprocessor system. When we initialize LKL with lkl_start_kernel, it does a number of things, including running init calls that set up the thread and CPU mutex locks and semaphores. One of the most important is lkl_cpu_change_owner(lkl_pthread_t), which operates on the cpu variable of type lkl_cpu. The lkl_cpu structure has many fields; some of the important ones:
** lock : a mutex deciding which host thread currently holds the CPU
** owner : thread id of the current CPU lock owner
** count : the number of times the current thread has acquired the lock
The lkl_cpu_change_owner function checks that count is not greater than 1, i.e. that the lock has not been handed out to any thread (including itself) more than once, and only then changes the owner to itself, keeping the count intact.

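A simplified, compilable sketch of that bookkeeping (the pthread stand-ins and exact field types are my assumptions; the real structure in LKL carries more state):

<code>
#include <pthread.h>
#include <stdio.h>

typedef pthread_t lkl_thread_t;

static struct lkl_cpu {
    pthread_mutex_t lock;  /* decides which host thread holds the CPU */
    lkl_thread_t owner;    /* thread id of the current lock owner     */
    unsigned int count;    /* times the owner has acquired the lock   */
} cpu = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* The owner may only change while the lock is held at most once. */
static int lkl_cpu_change_owner(lkl_thread_t new_owner)
{
    if (cpu.count > 1)
        return -1;         /* nested acquisition: refuse the handoff  */
    cpu.owner = new_owner; /* count is left intact                    */
    return 0;
}

int main(void)
{
    cpu.count = 1;
    printf("handoff %s\n",
           lkl_cpu_change_owner(pthread_self()) ? "refused" : "ok");
    return 0;
}
</code>
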
* So, when our socket(...) system call is issued through the LKL syscall API, it enters lkl_syscall(...), which first has to take the CPU lock using lkl_cpu_get(...). Suppose Thread 1 ran lkl_start_kernel and the first syscall (namely accept), thereby acquiring the lock; while it was processing, the scheduler decided to switch to Thread 2, which issued another syscall, in our case socket(...). Thread 2 then waits on the mutex for the first call to return, so that it calls lkl_cpu_put() and the lock is freed for Thread 2 to acquire.

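A sketch of that gate (the stubs are placeholders for LKL internals; the real lkl_syscall does considerably more):

<code>
#include <stdio.h>

/* Stubs standing in for LKL internals (illustration only). */
static int  lkl_cpu_get(void) { printf("acquire CPU lock (may block)\n"); return 0; }
static void lkl_cpu_put(void) { printf("release CPU lock\n"); }
static long run_syscall(long no, long *params) { (void)params; return no; }

/* Every syscall must win the single CPU lock before entering the
 * kernel, so a second host thread blocks in lkl_cpu_get() until the
 * first one finishes and calls lkl_cpu_put(). */
static long lkl_syscall(long no, long *params)
{
    long ret;
    if (lkl_cpu_get() < 0)
        return -1;
    ret = run_syscall(no, params);
    lkl_cpu_put();
    return ret;
}

int main(void)
{
    long params[6] = {0};
    /* 41 is __NR_socket on x86_64, used here only as an example */
    printf("syscall returned %ld\n", lkl_syscall(41, params));
    return 0;
}
</code>
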
* Now, there's something else to watch for. Even if we ran lkl_cpu_change_owner for every syscall, it would be inconsistent and would obviously lead to failures, and even where it doesn't fail outright, LKL guarantees that it eventually will. LKL is built on the idea that only one host thread acquires the lock; so when we manually make the previous thread drop the lock using lkl_cpu_put, mark the current thread as a host thread using set_thread_flag(TIF_HOST_THREAD) and then change the cpu owner, subsequent syscalls still fail because cpu.count keeps increasing, and it has an upper bound of 1.

* There's one legal way of changing cpu owners, and that is through the default LKL scheduler function schedule(...), which performs a context_switch(...) and calls the __switch_to(prev, next) function defined in arch/lkl/threads.c; this legally hands the cpu owner over from prev to next, where prev and next are of type struct task_struct.

* I then decided to work simultaneously on writing an SMP interface for LKL based on the x86 and arm64 architectures, and on porting net-next-nuse to Linux-5.12.

* Opened a thread on the LKL developers list to get expert suggestions on the design and implementation of the project's integration into DCE: [https://www.freelists.org/post/linux-kernel-library/LKL-Integration-into-DCE LKL Integration into DCE]

* net-next-nuse port: Linux-4.5 successful build and Linux-5.12 defconfig generation report:
** I could get Linux-4.5 to work (tested it with DCE too) with some effort. To be specific, I had to do the following:
*** The net-next-nuse Makefile under arch/lib uses objcopy and nm to rename some of the symbols exported by the kernel via EXPORT_SYMBOL(...), mapping each to rumpns_<symbol-name>. These symbols are rewritten because only a few to_keep files are compiled, both to reduce the size of the shared library and to gain control over certain important parts of the kernel such as the paging service, the scheduler, and the proc_sysctl interface. Some functions in Linux-4.5 are written differently from Linux-4.4; for example, the exported put_page (responsible for handling compound or single pages) has been replaced with __put_page, so I had to rewrite that part of the implementation a bit (see the sketch after this list).
*** It also introduces slib under memory management, a memory page slab allocator that also makes use of some DCE/nuse routines; much of its inspiration is drawn from the slob allocator (which is not linked into the library).
*** Had to modify include/linux/slab.h to honour the CONFIG_SLIB macro so that only the functions rewritten in mm/slib.c (custom written) are included in the final build; the option is already enabled in Kconfig.
*** Set up the proc_sysctl interface based on the corresponding upstream commit.
*** I don't think moving on to Linux-4.6 would be much harder either, though I haven't tried it yet.
** I was a little too curious to get my hands on the latest Linux kernel, so I tested Linux-5.12 as well, and with the changes mentioned below I could get the first of the two steps needed to generate the net-next-nuse library (i.e. make defconfig ARCH=lib) to complete successfully:
*** Identified changes in the defconfig process: I had to change where the defconfig is stored, which must now specifically be under arch/$(ARCH)/configs, and also how defconfig is invoked from the Kconfig build configuration.
*** The Kconfig scripting language has evolved quite a bit since the net-next-nuse-4.4 release, so I had to modify the way Kconfig uses environment variables, following the corresponding upstream Linux commit.
*** A significant amount of work is still needed to make (make library ARCH=lib) work; I'm working on that too.
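
Below is a hypothetical, compilable sketch (with stubbed types) of the kind of shim the put_page change calls for: a put_page() reinstated on top of __put_page(). The actual code in the port differs.

<code>
#include <stdio.h>

/* Stub standing in for the kernel's struct page (illustration only). */
struct page { int compound; int refcount; };

static void __put_page(struct page *page)
{
    /* slow path once the last reference has been dropped */
    printf("%s page released\n", page->compound ? "compound" : "single");
}

/* Hypothetical wrapper for when put_page() itself is no longer
 * exported: drop a reference and defer to __put_page() at zero. */
static void put_page(struct page *page)
{
    if (--page->refcount == 0)
        __put_page(page);
}

int main(void)
{
    struct page p = { .compound = 0, .refcount = 1 };
    put_page(&p);
    return 0;
}
</code>
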
* I also worked on creating a Docker setup for ns-3-dce, which adds the following advantages:
** Brings the current 12+ GB disk usage down to 7 GB (just about 720 MB more than the previous release). The extra space comes from a patch I made that lets ns-3-dce build against a custom Glibc-2.31 setup, so that features like the vtable hijacking DCE uses to redirect functions such as fopen and fseek to its own implementations keep working.
** In the dce-docker-beta folder (my git repo) there is a bake directory, and any changes made there are synced with the bake directory inside the container, so users can both run simulation scripts and make development changes to all the projects in the current bake installation.

* My beta Docker ns-3-dce test image can be tried out with the following commands (on any machine that can run docker and docker-compose):

Note: currently Docker has to be run under sudo, but this can be avoided by creating a docker group and adding the current user to it (e.g. sudo groupadd docker and sudo usermod -aG docker $USER); an init script that needs to be run only once could be provided for this.

<code>
git clone https://github.com/ParthPratim/dce-docker-beta.git

cd dce-docker-beta

sudo docker-compose up -d

sudo docker exec -w / -it ns-3-dce ./setup

sudo docker exec -w /bake/source/ns-3-dce -it ns-3-dce ./waf --run dce-linux-simple
</code>

To stop the docker instance:

<code>
sudo docker-compose down
</code>

* I also worked on fixes for the CircleCI interface of ns-3-dce and created a PR for it; I'll keep working on it based on Matt's suggestions: https://github.com/direct-code-execution/ns-3-dce/pull/115

= Project Overview =

* '''Project Name''': Direct Code Execution Modernization
* '''Student''': Parth Pratim Chatterjee
* '''Mentors''': Tom Henderson, Apoorva Bhargava, Vivek Jain
* '''Project Goals''': DCE currently uses net-next-nuse to expose Linux kernel internals such as the networking stack to host applications, but over the years the project has not kept up with the latest releases of the Linux kernel. As Linux progressed, a major part of the source code changed, making the previous glue code incompatible with newer implementations of the network stack: some init calls and function usage changed significantly, making migration to newer releases non-trivial. This project aims at enabling support for the latest Linux kernel features and toolchains in the DCE environment, with support for the socket networking stack, sysctl interfaces, system call access, etc., without any changes to the user APIs currently used by host applications. It aims to incorporate LKL (the Linux Kernel Library) into the DCE environment so that host applications can effortlessly make use of Linux kernel stacks with minimal to no change in existing simulation scripts.
* '''Repository''':
* '''About Me''': I'm a freshman Computer Science undergraduate at Kalinga Institute of Industrial Technology, Bhubaneswar, India. I have a keen interest in Linux internals and computer networking. I was a grand prize winner at Google Code-in 2018 for the ns-3 organization, which first introduced me to DCE. I have an aptitude for competitive programming and make heavy use of C/C++, the STL and other OOP concepts in solving algorithmic puzzles. I have more than three years of experience with C/C++ and Python, working on projects for numerous hackathons.

= Milestones and Deliverables =

The overall project goal is to update DCE so that the latest Linux systems are supported and the latest Linux kernel code can be used.

* Detailed Project Plan (will be continuously updated throughout the GSoC program duration)
* '''Phase 1'''
** Ubuntu 20.04 support
*** Goal is that most capabilities presently available for Ubuntu 16 DCE will be available for Ubuntu 20.04 (native)
*** Also produce a Docker image and documentation to ease the installation process
*** Also contact glibc maintainers about a non-patched solution
** Upgrade net-next-nuse Linux kernel support to a recent kernel
*** Focus is on the Google BBRv2 kernel (5.10 base): https://github.com/google/bbr
*** Borrow from net-next-nuse-4.4.0 and LKL as appropriate to try to get a new version of net-next-nuse
*** Review existing tests and define/write new tests
*** More about this [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=id.ixtd8h2ia3hf here]
** Investigate SMP architecture for LKL by querying the LKL developers list
*** More about this [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=kix.h6e9x86v84qy here]
* '''Phase 2'''
** To be determined based on Phase 1 results
* '''Phase 3'''
** To be determined

= Weekly Reports =

* '''Community Bonding Period''' (May 17 - June 7)
** Figured out possible scheduling bottlenecks in LKL in blocking network calls. [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=kix.h6e9x86v84qy Section 4.5]
** Developed a beta Docker port for ns-3-dce. [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=kix.qn5nf4lqt7uj Section 1.1]
** Discussed the benefits of porting the latest Linux kernel using the net-next-nuse architecture.
** Discussed possible regression tests for verifying both performance and results.
* '''Week 1''' (June 7 - June 14)
** Implemented the first Linux kernel 5.12 port for DCE. [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=id.ixtd8h2ia3hf Section 5]
** Passed 5 tests/examples. [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=kix.pqkknl5dxvb2 Section 5.12]
** Initiated talks with the LKL team to get reviews on a possible SMP port of LKL. [https://docs.google.com/document/d/1o3xsukgDN9e4-q8n6KbLDX2c9fTxKIhRhimDsn4ivr8/edit#bookmark=kix.h6e9x86v84qy Section 4.5]