Blog

  • lrt

    LRT

    What is it?

    LRT is a scheduler for long-running tasks inside browsers and Node.js.

    Key features

    • API to split long-running tasks into units of work via Iterator protocol
    • Ability to run multiple long-running tasks concurrently, coordinating their execution via cooperative scheduling
    • Ability to abort outdated tasks
    • Ability to specify chunk budget and maximize its utilization
    • Built-in set of predefined chunk schedulers
    • Ability to implement custom chunk scheduler
    • Supports generators for task splitting
    • Works in both browsers and Node.js
    • Small, fast and dependency-free

    The main idea is to split long-running tasks into small units of work joined into chunks with a limited budget of execution time. Units of work are executed synchronously until the budget of the current chunk is exhausted; the thread is then unblocked until the scheduler executes the next chunk, and so on until all tasks have completed.

    (Diagram from the original README omitted.)

    Installation

    $ npm install lrt
    

    Note: LRT requires native Promise and Map, so if your environment doesn’t support them you will need to install suitable polyfills as well.

    Usage

    // with ES6 modules
    import { createScheduler } from 'lrt';
    
    // with CommonJS modules
    const { createScheduler } = require('lrt');

    API

    const scheduler = createScheduler(options);
    • options (object, optional)
    • options.chunkBudget (number, optional, default is 10) The execution budget of a chunk, in milliseconds.
    • options.chunkScheduler (string|object, optional, default is 'auto') A chunk scheduler; can be 'auto', 'idleCallback', 'animationFrame', 'postMessage', 'immediate', 'timeout' or an object representing a custom scheduler.

    The returned scheduler has two methods:

    • const task = scheduler.runTask(taskIterator) Runs a task with the given taskIterator and returns a task (promise) that is resolved once the task completes, or rejected if it throws an error.
    • scheduler.abortTask(task) Aborts task execution as soon as possible (see the diagram above).
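
    A minimal usage sketch of these two methods (someTaskIterator() here is a placeholder for any iterator, such as a generator object; a full runnable example appears in the Example section below):

    const scheduler = createScheduler();

    // Run a task built from an iterator and handle its completion
    const task = scheduler.runTask(someTaskIterator());

    task.then(
        result => console.log('task finished with', result),
        err => console.error('task failed with', err)
    );

    // Abort the task later if its result is no longer needed
    scheduler.abortTask(task);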

    Scheduler

    The scheduler is responsible for running tasks, aborting them, and coordinating the execution order of their units. It accumulates statistics while tasks are running and tries to maximize the budget utilization of each chunk. If a unit of some task doesn’t get time to execute in the current chunk, it gets a higher priority in the next chunk.

    Task iterator

    A task iterator should be an object implementing the Iterator protocol. The most convenient way to build one is to use a generator (calling a generator function returns a generator object, which implements the iterator protocol). Another option is to build your own object implementing the iterator protocol.

    Example with generator:

    function* generator() {
        let i = 0;
    
        while(i < 10) {
            doCurrentPartOfTask(i);
            i++;
            yield;
        }
    
        return i;
    }
    
    const iterator = generator();

    Example with object implementing iterator protocol:

    const iterator = {
        next(i = 0) {
            doCurrentPartOfTask(i);

            return {
                done: i + 1 >= 10, // done after the 10th unit (i = 9), matching the generator above
                value: i + 1
            };
        }
    };

    For convenience, LRT passes the previous value as an argument to the next method. The first next call doesn’t receive this argument, so a default value can be specified as the initial one.
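
    To illustrate the calling convention, here is roughly how such an iterator gets driven (a simplified sketch, not LRT's actual internals):

    let step = iterator.next(); // no argument, so i falls back to its default of 0

    while (!step.done) {
        // each subsequent call receives the value returned by the previous one
        step = iterator.next(step.value);
    }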

    Chunk scheduler

    The chunk scheduler is used internally to schedule execution of the next chunk of units. Built-in options:

    • 'auto' (the default) LRT will try to detect the best available option for your current environment. In browsers, one of 'idleCallback' / 'animationFrame' / 'postMessage' will be used depending on availability; inside Node.js, 'immediate'. If nothing suitable is available, the 'timeout' option will be used as a fallback.
    • 'idleCallback' LRT will try to use the Background Tasks API. If it’s not available, the 'timeout' option will be used as a fallback.
    • 'animationFrame' LRT will try to use requestAnimationFrame. If your tasks need to change the DOM, you should use this instead of 'auto' or 'idleCallback'. If it’s not available, the 'timeout' option will be used as a fallback.
    • 'postMessage' LRT will try to use postMessage. If it’s not available, 'timeout' option will be used as a fallback.
    • 'immediate' LRT will try to use setImmediate. If it’s not available, 'timeout' option will be used as a fallback.
    • 'timeout' LRT will use setTimeout with zero delay.

    You can also provide your own scheduler implementation.

    Custom chunk scheduler

    A custom scheduler should implement two methods:

    • request(fn) (required) Accepts a function fn and returns a token that can later be used to cancel scheduling via the cancel method (if specified)
    • cancel(token) (optional) Accepts a token and cancels the scheduling

    For example, let’s implement a scheduler that runs the next chunk of units ~100 milliseconds after the previous chunk has ended:

    const customChunkScheduler = {
        request: fn => setTimeout(fn, 100),
        cancel: token => clearTimeout(token)
    };
    
    const scheduler = createScheduler({
        chunkScheduler: customChunkScheduler
    });

    Questions and answers

    What if a unit takes more time than the chunk budget?

    Most likely this means that the chunk budget is too small or that you need to split your tasks into smaller units. In any case, LRT guarantees that at least one unit of a task is executed within each chunk, even if it exceeds the budget.
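
    If the budget itself is too tight, it can be raised via the documented chunkBudget option (the value below is illustrative):

    const scheduler = createScheduler({
        chunkBudget: 50 // milliseconds per chunk instead of the default 10
    });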

    Why not just move a long-running task into a Web Worker?

    Despite the fact that Web Workers are very useful, they do have costs: time to instantiate/terminate workers, message latency on large workloads, the need for coordination between threads, and lack of access to the DOM. Nevertheless, you can use LRT inside a Web Worker and get the best of both worlds: don’t affect the main thread and keep the ability to abort outdated tasks.

    Example

    // Create scheduler
    const scheduler = createScheduler();
    
    // Imitate one unit of a long-running task taking 80 ms in total (10 units × 8 ms each)
    function doPartOfTask1() {
        const startTime = Date.now();
    
        while(Date.now() - startTime < 8) {}
    }
    
    // Imitate one unit of another long-running task taking 100 ms in total (20 units × 5 ms each)
    function doPartOfTask2() {
        const startTime = Date.now();
    
        while(Date.now() - startTime < 5) {}
    }
    
    function* task1Generator() {
        let i = 0;
    
        while(i < 10) { // 10 units will be executed
            doPartOfTask1();
            i++;
            yield;
        }
    
        return i;
    }
    
    function* task2Generator() {
        let i = 0;
    
        while(i < 20) { // 20 units will be executed
            doPartOfTask2();
            i++;
            yield;
        }
    
        return i;
    }
    
    // Run both tasks concurrently
    const task1 = scheduler.runTask(task1Generator());
    const task2 = scheduler.runTask(task2Generator());
    
    // Wait until the first task has completed
    task1.then(
        result => {
            console.log(result); // prints "10"
        },
        err => {
            console.error(err);
        }
    );
    
    // Abort the second task after 50 ms; it won't complete
    setTimeout(() => scheduler.abortTask(task2), 50);
    Visit original content creator repository https://github.com/dfilatov/lrt
  • go-http-easy-test

    Go http easy test

    A package that wraps net/http/httptest and allows you to easily test HTTP Handlers.

    ✅ Easy
    ✅ Intuitive
    ✅ Support application/json
    ✅ Support application/x-www-form-urlencoded
    ✅ Support multipart/form-data
    ✅ Support Echo package
    ✅ Support cookie

    Install

    go get -u github.com/cateiru/go-http-easy-test/v2

    Mock

    The user can choose from the following two options.

    • Actually start the server using httptest.NewServer
    • Mock the Handler arguments (w http.ResponseWriter, r *http.Request)

    Actually start the server using httptest.NewServer

    package main_test
    
    import (
        "net/http"
        "testing"
    
        "github.com/cateiru/go-http-easy-test/v2/easy"
    )
    
    func Handler(w http.ResponseWriter, r *http.Request) {
        // ...do something
    }
    
    func TestHandler(t *testing.T) {
        mux := http.NewServeMux()
        mux.HandleFunc("/", Handler)
    
        // create server
        s := easy.NewMockServer(mux)
        // Start the server with TLS using:
        // s := easy.NewMockTLSServer(mux)
        defer s.Close()
    
        // Option: You can set cookies.
        cookie := &http.Cookie{
            Name:  "name",
            Value: "value",
        }
        s.Cookie([]*http.Cookie{
            cookie,
        })
    
        // GET
        resp := s.Get(t, "/")
        resp := s.GetOK(t, "/")
    
        // POST
        resp := s.Post(t, "/", "text/plain", body)
        resp := s.PostForm(t, "/", url) // application/x-www-form-urlencoded
        resp := s.PostJson(t, "/", obj) // application/json
        resp := s.PostString(t, "/", "text/plain", body)
    
        // Easily build multipart/form-data
        form := easy.NewMultipart()
        form.Insert("key", "value")
        resp := s.PostFormData(t, "/", form)
        resp := s.FormData(t, "/", "[method]", form)
    
        // Other
        resp := s.Do(t, "/", "[method]", body)
    
        // The `resp` of all return values is easy to compare.
        // Check status
        resp.Ok(t)
        resp.Status(t, 200)
    
        // get body
        body := resp.Body().String()
    
        // Compare response body
        resp.EqBody(t, body)
        resp.EqJson(t, obj)
    
        // parse response json
        body := new(JsonType)
        err := resp.Json(body)
    
        // returns Set-Cookie headers
        cookies := resp.SetCookies()
    }

    Mock the Handler arguments (w http.ResponseWriter, r *http.Request)

    package main_test
    
    import (
        "net/http"
        "testing"
    
        "github.com/cateiru/go-http-easy-test/v2/easy"
        "github.com/labstack/echo/v4"
    )
    
    func Handler(w http.ResponseWriter, r *http.Request) {
        // ...do something
    }
    
    func EchoHandler(c echo.Context) error {
        // ...do something
    }
    
    func TestHandler(t *testing.T) {
        // Default
        m, err := easy.NewMock(body, http.MethodGet, "/")
        m, err := easy.NewMockReader(reader, http.MethodGet, "/")
    
        // GET
        m, err := easy.NewGet(body, "/")
    
        // POST or PUT send json
        m, err := easy.NewJson("/", data, http.MethodPost)
    
        // POST or PUT send x-www-form-urlencoded
        m, err := easy.NewURLEncoded("/", url, http.MethodPost)
    
        // POST or PUT send multipart/form-data
        // Easily build multipart/form-data using the `contents` package.
        m, err := easy.NewFormData("/", multipart, http.MethodPost)
    
    
        // Option: set remote addr
        m.SetAddr("203.0.113.0")
    
        // Option: You can set cookies.
        cookie := &http.Cookie{
            Name:  "name",
            Value: "value",
        }
        m.Cookie([]*http.Cookie{
            cookie,
        })
    
        // Set handler and run
        m.Handler(Handler)
    
        // Use echo package
        echoCtx := m.Echo()
        err := EchoHandler(echoCtx)
    
        // check response
        m.Ok(t)
        m.Status(t, 200)
    
        // Compare response body
        m.EqBody(t, body)
        m.EqJson(t, obj)
    
        // parse response json
        body := new(JsonType)
        err := m.Json(body)
    
        // returns Set-Cookie headers
        cookies := m.SetCookies()
    
        // Return http.Response
        response := m.Response()
    
        // set-cookie
        cookie := m.FindCookie("name")
    }

    multipart

    Easily create multipart/form-data requests. This is used when submitting with multipart/form-data.

    package main
    
    import (
        "os"
    
        "github.com/cateiru/go-http-easy-test/v2/easy"
    )
    
    
    func main() {
        m := easy.NewMultipart()
    
        // Add a string format form.
        err := m.Insert("key", "value")
    
        // Add a file format form.
        file, err := os.Open("path")
        err := m.InsertFile("key", file)
    
        // Outputs in the specified format.
        body := m.Export()
        contentType := m.ContentType()
    
        // Use with the mock server:
        // actually start the server using `httptest.NewServer`
        s := easy.NewMockServer(mux)
        defer s.Close()
        resp := s.PostFormData(t, "/", m)
        // Or mock the Handler arguments (`w http.ResponseWriter, r *http.Request`)
        req, err := easy.NewFormData("/", m, http.MethodPost)
    }

    License

    MIT

    Visit original content creator repository https://github.com/cateiru/go-http-easy-test
  • Moria-Dark-Processing-Theme

    Moria Dark UI & Syntax Processing Theme

    A beautiful dark matte UI theme for the Processing 3.0 IDE which includes a pleasant, high-contrast syntax style

    The Moria UI Theme is a beautiful dark matte Processing 3.4 theme designed with a developer’s focus in mind. Enrich your development experience with a colour scheme optimized for long programming sessions.

    Install Moria-UI

    • Ensure you have Processing 3.4 or later installed!
    • Download theme.txt and save it in the Processing sketchbook directory. This can be found via the Processing 3 IDE: File > Preferences > Sketchbook Location. Don’t overwrite the theme.txt in the Processing installation directory!

    Install Moria-Syntax

    (Required if using Moria-UI)

    Copy and replace the following lines in your Preferences.txt. Ensure that Processing is NOT RUNNING before you make changes to this file. The path to Preferences.txt is listed at the bottom of the window in the Processing 3 IDE under File > Preferences.

    editor.token.comment1.style=#999999,plain
    editor.token.comment2.style=#999999,plain
    editor.token.function1.style=#FA497B,plain
    editor.token.function2.style=#FA497B,plain
    editor.token.function3.style=#B881E7,plain
    editor.token.function4.style=#FA497B,bold
    editor.token.invalid.style=#999999,bold
    editor.token.keyword1.style=#B881E7,plain
    editor.token.keyword2.style=#33997e,plain
    editor.token.keyword3.style=#B881E7,plain
    editor.token.keyword4.style=#F9CB7B,plain
    editor.token.keyword5.style=#6CC3fC,plain
    editor.token.keyword6.style=#33997e,plain
    editor.token.label.style=#999999,bold
    editor.token.literal1.style=#F9CB7B,plain
    editor.token.literal2.style=#B881E7,plain
    editor.token.operator.style=#F9CB7B,plain


    License

    The following licensing applies to the Moria Dark Theme: Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0). For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/

    Visit original content creator repository https://github.com/Forge-Media/Moria-Dark-Processing-Theme
  • probabilistic2020

    Probabilistic 20/20

    The Probabilistic 20/20 test identifies genes with significant oncogene-like and tumor suppressor gene-like mutational patterns for small coding-region variants. Putative significant oncogenes are found by evaluating missense mutation clustering and in silico pathogenicity scores; highly clustered missense mutations are often indicative of activating mutations. Statistically significant tumor suppressor genes (TSGs) are found by an abnormally high proportion of inactivating mutations.

    Probabilistic 20/20 evaluates statistical significance with Monte Carlo simulations that incorporate the observed mutation context. Simulations are performed within the same gene, thus avoiding a background distribution built from other genes. This means the statistical test can be applied either to all genes in the exome from exome sequencing or to a certain target set of genes from targeted sequencing.

    The Probabilistic 20/20 test has nice properties since it accounts for several factors that could affect the significance of driver genes:

    • gene length
    • mutation context
    • gene sequence (e.g. codon bias)

    Documentation

    Please see the documentation on readthedocs for more details.

    Citation

    Collin J. Tokheim, Nickolas Papadopoulos, Kenneth W. Kinzler, Bert Vogelstein, and Rachel Karchin. Evaluating the evaluation of cancer driver genes. PNAS 2016 ; published ahead of print November 22, 2016, doi:10.1073/pnas.1616440113

    If you use the hotmaps1d command to find codons where missense mutations are significantly clustered, please cite the HotMAPS paper:

    Tokheim C, Bhattacharya R, Niknafs N, Gygax DM, Kim R, Ryan M, Masica DL, Karchin R (2016) Exome-scale discovery of hotspot mutation regions in human cancer using 3D protein structure Cancer Research. Apr. 28.pii: canres.3190.2015.

    Installation

    Python Package Installation

    Using the Python package installation, all the required Python packages for the Probabilistic 20/20 test will automatically be installed for you.

    To install the package into Python you can use pip. If you are installing to a system-wide Python, you may need to use sudo before the pip command.

    $ pip install probabilistic2020

    The scripts for Probabilistic 20/20 can then be found in Your_Python_Root_Dir/bin. You can check the installation with the following:

    $ which probabilistic2020
    $ probabilistic2020 --help

    Local installation

    Local installation is a good option if you do not have privileges to install a Python package system-wide and already have the required packages. The source files can also be manually downloaded from GitHub at https://github.com/KarchinLab/probabilistic2020/releases.

    Required packages:

    • numpy
    • scipy
    • pandas>=0.17.0
    • pysam

    If you don’t have the above required packages, you will need to install them. For the following commands to work you will need pip. If you are using a system-wide Python, you will need to use sudo before the pip command. Also, if you are using Python 3.x, you will likely have to install pysam version >=0.9.0.

    $ cd probabilistic2020
    $ pip install -r requirements.txt

    If you want the exact package versions used for development on Python 2.7, use requirements_dev.txt instead. Next you will need to build the Probabilistic 20/20 source files. This can be accomplished in one command.

    $ make build

    Once finished building, you can use the scripts in the probabilistic2020/prob2020/console directory. You can check that the build worked with the following:

    $ python prob2020/console/probabilistic2020.py --help
    Visit original content creator repository https://github.com/KarchinLab/probabilistic2020
  • conan-expat

    Package Deprecation Notice

    This library is now officially supported by Pix4D, which can be found at the following links:

    https://github.com/Pix4D/conan-expat
    https://bintray.com/pix4d/conan/Expat%3Apix4d/2.2.6%3Astable

    Bincrafters will keep this version of the package on GitHub and Bintray; however, it will no longer be maintained or supported. Users are advised to update their projects to use the official Conan package maintained by the library author immediately.

    Conan.io Information

    Bincrafters packages can be found in the following public Conan repository:

    Bincrafters Public Conan Repository on Bintray

    Note: You can click the “Set Me Up” button on the Bintray page above for instructions on using packages from this repository.

    Issues

    If you wish to report an issue or make a request for a Bincrafters package, please do so here:

    Bincrafters Community Issues

    General Information

    This GIT repository is managed by the Bincrafters team and holds files related to Conan.io. For detailed information about Bincrafters and Conan.io, please visit the following resources:

    Bincrafters Wiki – Common README

    Bincrafters Technical Documentation

    Bincrafters Blog

    License Information

    Bincrafters packages are hosted on Bintray and contain Open-Source software which is licensed by the software’s maintainers and NOT Bincrafters. For each Open-Source package published by Bincrafters, the packaging process obtains the required license files along with the original source files from the maintainer, and includes these license files in the generated Conan packages.

    The contents of this GIT repository are completely separate from the software being packaged and are therefore licensed separately. The license for all files contained in this GIT repository is defined in the LICENSE.md file in this repository. The licenses included with all Conan packages published by Bincrafters can be found in the Conan package directories in the following locations, relative to the Conan Cache root (~/.conan by default):

    License(s) for packaged software:

    ~/.conan/data/<pkg_name>/<pkg_version>/bincrafters/package/<random_package_id>/license/<LICENSE_FILES_HERE>
    

    Note : The most common filenames for OSS licenses are LICENSE AND COPYING without file extensions.

    License for Bincrafters recipe:

    ~/.conan/data/<pkg_name>/<pkg_version>/bincrafters/export/LICENSE.md
    
    Visit original content creator repository https://github.com/bincrafters/conan-expat
  • Adafruit-Pi-Stock-Bot

    Adafruit Raspberry Pi Stock Alert Bot


    Raspberry Pi 5 Model B


    DigitalOcean Referral Badge

    What This Is

    A simple Discord and Slack bot that checks the stock status of selected Raspberry Pi models on Adafruit and sends a message to a Discord/Slack channel when one comes in stock. This bot is designed to be self-hosted and run for use in your own Discord server or Slack workspace.

    Why?

    Because Adafruit’s stock notification system is lacking. It’s a FIFO queue that does not reliably trigger notifications in a timely manner and sometimes removes your notification entirely even when you never got one! This means that every time any restock happens at all, even if it’s small and doesn’t trigger your notification, you’ll likely miss it AND have to go back and re-subscribe to the notifications. This bot removes the need for that by allowing you to quickly get a @mention in your Discord server or a message in your Slack channel every time there is a restock, without a delay!

    How It Works

    On a set interval, the bot queries Adafruit’s product pages for the models you have enabled and checks for stock statuses changing to in stock. If one or more models come in stock, a notification is sent to the configured Discord server channel with accompanying @role mentions. For Slack, the notification simply goes to the configured channel, since Slack doesn’t have a roles system like Discord does. In either case, the notification contains a direct link to the page of the SKU that’s in stock so you can buy it right away. Stock statuses are tracked between update intervals, so you won’t get spammed with the same notification on every check once the bot has already notified for a model’s current stock event. This is handled in a smart way to ensure you always get one notification every time any model comes in stock, while never missing a restock!

    How to Set Up and Run

    • Install Node.js LTS edition for your specific environment using the site or a package manager. Node.js is supported basically everywhere which allows this bot to be multi-platform!
    • Clone the repo, then run npm install from a terminal in the root project folder. This installs all necessary dependencies for you.
    • Follow the below instructions for setting up your very own Discord or Slack bot (or even both!). Be sure to complete the final steps as well once you finish the Discord and Slack specific instructions.

    Discord Bot Set Up

    • Go to the Discord Developer Portal and click “New Application” at the top.
    • Give your bot application a name and hit create.
    • From the left side navigation column, select the “Bot” tab (has a puzzle piece icon), click “Add Bot” and confirm with the “Yes, do it!” button.
    • From here, go ahead and set a username and avatar for your new bot. You’ll want to uncheck the “Public Bot” option as well.
    • Now you need to make an invite link so you can add the bot to your server. From the left side navigation column, select the “General Information” tab.
    • Copy your “Application ID” shown there. You will put this into the following template link so it can identify your bot.
    • Use this invite link as your template: https://discord.com/oauth2/authorize?client_id=APPLICATION_ID_HERE&scope=bot&permissions=412652817488
    • Replace APPLICATION_ID_HERE in that link with your actual application ID you copied earlier.
    • Now go ahead and use that link to add your bot to your server. Be sure to leave all permissions checked! These are pre-configured for you.
    • It’s important that you add the bot to your server before you proceed. The bot program expects to already have access to the server when it starts up.
    • Now, you need to configure the config.json file for your use. This file is located in the /config directory. Open the file in a text editor.
    • Enter your bot’s token under the token field of the discord section of the config. Your token can be found back in the developer portal under the “Bot” tab again. Click on “Reset Token” and copy it. KEEP THIS SAFE AND PRIVATE!
    • Now enter the ID number of the server you added the bot to earlier in the serverID field. You can get this from within Discord by right-clicking on the server icon (with developer options enabled in settings).
    • Now enter the name of the channel in your server where you’d like updates posted in the channelName field. You can leave this blank if you want the bot to create a new one for you (it will be named pi-stock-notifications).

    Slack Bot Set Up

    • Go to the Slack App API and click “Create an app”, then select “From scratch” in the popup that appears.
    • Give your Slack App a name and select your workspace you’d like to add the bot to, then click “Create App”.
    • Along the left side navigation under the “Features” section, select “OAuth & Permissions”. Once selected, scroll down to the “Scopes” section.
    • Under “Bot Token Scopes”, NOT THE USER TOKEN SCOPES, click the “Add an OAuth Scope” button and then add these scopes:
      • “chat:write”,
      • “chat:write.public”,
      • “links:write”,
      • “channels:write.topic”,
      • “chat:write.customize”,
      • “groups:write”,
      • “reactions:read”,
      • “reactions:write”.
    • Now scroll back up and click the “Install to ” button. Allow the app access to your workspace using the “Allow” button on the screen that appears.
    • You will now be shown a page with your bot token. Copy the “Bot User OAuth Token” (not the “User OAuth Token”), and paste it in the token field of the slack section in the config.json. KEEP THIS TOKEN SAFE AND PRIVATE! Your token should start with “xoxb” to confirm it is the bot one. A non-bot token will start with “xoxp”, which we don’t want.
    • Create at least one channel for the bot to post into. Put the name of the channel into the config.json in the channelName field of the slack section.
    • If your channel is private, you must add the app to it. You can do this by typing “/all” in the message box; you should see a suggestion popup to “Add apps to this channel”. Select that and hit “Add” next to your app in the list. Your bot will not be able to post to the channel without being added like this. For public channels this isn’t needed; the bot can already post to them.

    Final Configuration Steps and Bot Startup

    • In the config.json file (a hedged example sketch follows this list):
      • Indicate whether you are using the Discord bot, Slack bot, or even both, using the enableBot option in the Discord and Slack sections of the config file. Both are on (true) by default; adjust them as needed. Remember, you can’t start without at least one on, but why would you try that anyway?
      • Enter the update interval in seconds for updateIntervalSeconds (default is 60 seconds).
      • Set any models you don’t wish to monitor to false under the modelsSelection section (all are enabled (true) by default).
      • Choose whether or not you want sleep mode enabled using enableSleepMode. Sleep mode just prevents the bot from querying Adafruit overnight when restocks aren’t happening (enabled (true) by default). This prevents needless spam to Adafruit’s servers while they are closed.
      • Set whether to skip sending alerts for in-stock models right on bot startup using the skipStartupAlerts option in config.json. This is useful if you are restarting the bot multiple times and don’t want to get spammed with alerts for models that are currently in stock at every startup. It is disabled (false) by default. If enabled, the bot skips stock alerts for currently in-stock models when it starts, but still sends them when a model comes back in stock after going out of stock while the bot is running.
    • Yay! You are now ready to start your bot! Go ahead and run npm start in a terminal in the project directory to launch the bot!
    • If you are using the Discord bot, be sure to make use of the roles that the bot created! Add them to yourself and others so you get mentioned when stock comes in.
    • That’s it! I hope you get the shiny new Pi you’ve been looking for! 🙂
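
    For orientation, here is a hedged sketch of what a filled-in config.json might look like. It is assembled purely from the option names described above, so the exact structure of the shipped file may differ; the model key under modelsSelection is a hypothetical example:

    {
        "updateIntervalSeconds": 60,
        "enableSleepMode": true,
        "skipStartupAlerts": false,
        "discord": {
            "enableBot": true,
            "token": "YOUR_DISCORD_BOT_TOKEN",
            "serverID": "123456789012345678",
            "channelName": "pi-stock-notifications"
        },
        "slack": {
            "enableBot": false,
            "token": "xoxb-YOUR_SLACK_BOT_TOKEN",
            "channelName": "pi-stock-notifications"
        },
        "modelsSelection": {
            "raspberryPi5_8GB": true
        }
    }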

    Optional Final Configuration

    • You can daemonize the app using PM2. A PM2 process definition file has been provided to do so. Simply run pm2 start process.json in the project directory to start the bot as a daemon. You can also use the pm2 monit command to monitor the bot’s status and log output. Starting the bot this way allows it to run in the background and restart automatically if it crashes for any reason. On Linux, you can use the pm2 startup command to have the bot start on system boot. See the PM2 docs for more info. This run method is highly recommended if you want more of a “set it and forget it” experience. It’s great!

    Running as Docker Container

    If you prefer Docker, it is supported to deploy this bot as a container. A Docker Hub repository is maintained and updated with each release of the bot. You can find it here.

    To run the container using the latest release, you can use the following command:

    • docker run -v adafruit-pi-bot:/usr/src/app/config:rw -d ultimate360/adafruit-pi-stock-bot

    The /config directory is added as a volume so you can access the config files from your host. As written, the command uses the default volume location and names the volume adafruit-pi-bot. You can rename it or customize the mount path however you want; just be sure to update the command above to match.

    Once the container starts, you will notice it immediately exits. This is because the config file is missing values that you need to fill in. Use the normal instructions above to fill in the config.json file located at the new volume mount we created. Once you have done this, you can restart the container and it will run normally.

    If you wish to build the container yourself, for example to get the latest commits above the last release or to include your own modifications, there are npm commands to help you do this. You can run npm run docker-build to build the container, and npm run docker-run to run it. These scripts utilize pre-configured settings through a dockerfile and volume mounting of configuration files.

    Customizing the Messages and Adding Additional Models

    You may notice another file sitting in the /config directory, named models.json. The file contains all of the metadata the bot uses for the stock notifications. You can edit this file to change the notification messages to your liking, whether that be new descriptions, titles, names, links, images, etc. You can also add new models to the file if you want to monitor more than the default models. The bot will automatically pick up any changes you make to this file and use them. Just be sure to follow the same format as the default models in the file, and remember to add each one as a modelsSelection option in config.json with the name matching what you put for configFileName in the models file (a hedged example entry follows). Enjoy!
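
    As a hedged illustration, an entry in models.json might take a shape like the following; every field name here except configFileName is hypothetical, so check the shipped models.json for the real schema:

    {
        "configFileName": "raspberryPi5_8GB",
        "name": "Raspberry Pi 5 8GB",
        "description": "Hypothetical example entry",
        "link": "https://www.adafruit.com/product/0000",
        "image": "https://example.com/pi5.png"
    }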


    What the Messages Look Like

    (Screenshots: Discord multi-model message, Discord single-model message, and Slack message.)


    One More Thing

    Like this bot? Show some support! Give me a star on this repo and share it with your friends! You can also sign up for DigitalOcean to host this bot (or whatever else); I get a small referral kickback when you use the blue Digital Ocean button at the top of this page 🙂
    Contributions are welcome and encouraged! Feel free to open a pull request or issue for things you notice or want to fix/improve.
    If you want to chat, you can find me in the support Discord server of my other popular bot that I made called TsukiBot, Join Here!

    Visit original content creator repository https://github.com/EthyMoney/Adafruit-Pi-Stock-Bot
  • nlg-games

    Games that use Natural Language Generation (NLG)

    A curated list of digital games that use Natural Language Generation techniques. Games using Procedural Content Generation can be added too, as long as the generated content gives rise to textual game assets as well. For an example, see the weapon types of Diablo III.

    Follow me on Twitter.

    Commercial games

    • Diablo III – Some of the loot in the Diablo games is procedurally generated. Different types of weapons have specific names and bonuses. List of weapon types
    • Blood & Laurels – No longer available but an iOS IF game which ran on “Versu” an engine built specifically for NLG use cases. The game generated character behaviours and dialogue procedurally. Posts on this system can be found on Emily Short’s blog such as Writing for Versu and Versu: Conversation Implementation.
    • Bot Colony (2014) – 3D commercial game using conventional dialogue-systems technology, including a full pipeline for natural language understanding.

    Indie games

    Idle games

    Interactive Fiction (IF)

    • Alabaster – An IF game which incorporates 400+ snippets of text that get patched together based on internal state models. As with other works by Emily Short, various posts exist on her blog discussing implementation such as Moods in conversation.
    • Glass – An IF game which takes a procedural approach to conversation modelling based on an internal conversation state flag. Making of is found here and further discussion of the “Waypoint” narrative structure used is found here.
    • The Mary Jane of Tomorrow – An IF game which uses procedural generation to determine how a robot NPC will behave/talk to the player based on internal state. A system not used in the game itself but which has similar end results is discussed in Applying Filters to Character Dialogue
    • Voyageur – An IF game that uses the Improv generative text library (created by the game’s author, Bruno Dias) to generate procedural descriptions of the various locations and situations you encounter. Here’s an interview with Bruno in which he discusses his approach to writing with procedural text.
    • Seedship – An IF game that procedurally assembles star system and event descriptions as you journey through space. Source code doesn’t appear to be readily available, but examining the HTML reveals that some of the surface text is being stitched together from smaller pieces.
    • Starfreighter – An IF game about eking out an existence as a freelancing light freighter captain in a procedurally generated region of space. The vast majority of text in the game is procedural; here’s a file with some good examples of how the game uses grammars to generate surface text.

    Other

    • Interruption Junction (2015) – An arcade game by Dietrich “Squinky” Squinkifier that explores the phenomenology of awkward conversation. Its ceaseless character dialogue is generated using Kate Compton’s Tracery.
    • A Tough Sell (2015) – A talking simulator that utilizes the ChatScript chatbot technology. The player speaks as Snow White’s evil stepmother in an effort to get Snow White, an NPC, to eat the poisoned apple. The first game by LabLabLab, an academic game studio dedicated to exploring conversational interaction in games.
    • SimProphet (2015) – The second LabLabLab talking simulator also uses ChatScript technology, this time with a narrative framing in which the player is a deity attempting to convince an NPC peasant to convert to the deity’s religion. This game extends A Tough Sell by reusing components of the player’s utterances.
    • SimHamlet (2016) – In LabLabLab’s third ChatScript-driven game, the player is cast as a government official who must ascertain, from a gravedigger NPC, the various details of Hamlet’s tragical events. This game extends the previous two LabLabLab efforts by making the player’s task one of information retrieval from an NPC.
    • Hammurabi (2017) – LabLabLab’s fourth effort is a remake of the 1968 videogame of the same name and features more extensive use of natural language generation (using James Ryan’s Expressionist system). At its heart a simple resource-management game (like the original), it obfuscates the underlying game state through a collection of three NPCs who serve as subjective (and biased) mediators between the player and that state. Updates about in-game events are expressed through generated character monologues.
    • Subject and Subjectivity (2017) – This LabLabLab title, whose development was led by Ben Kybartas, is set in a Jane Austen-inspired world in which the player must match her friends with ideal bachelors. As the game proceeds, NPCs engage in polite conversation, with the utterances being generated from a context-free grammar that works similarly to Kate Compton’s Tracery.

    Research

    • Crystal Island (2009) – A serious game developed by researchers at North Carolina State University. NPC dialogue is generated using a probabilistic unification-based architecture.
    • SpyFeet (2011) – An augmented-reality exercise game that utilizes conventional pipelines for dialogue management and natural language generation. A prototype was produced, but not released publicly.
    • MKULTRA (2014) – An experimental game by Northwestern professor Ian Horswill that generates character dialogue using a definite-clause grammar that is specified in Prolog. The game centers on a novel game mechanic that Horswill calls “belief injection”: the player can freely type in messages that trigger updates to NPC beliefs, which is how puzzles are solved. A prototype with a complete first level is available on GitHub: https://github.com/ianhorswill/MKULTRA.

    Visit original content creator repository https://github.com/jd7h/nlg-games

  • PCIe_x8_Breakout

    PCIe_x8_Breakout

    PCIe x8 Signal Breakout to U.FL/UMCC Connectors.

    PCIEX1-SMA is a similar project that is PCIe x1 and uses SMA connectors.

    The board requires up to 34 U.FL/UMCC Surface Mount Receptacles and a 16-Pin 2.54mm Header or Socket.

    PCIe x8 Breakout PCB

    Related Projects: OpenCAPI_Breakout, OpenCAPI-to-PCIe, OpenCAPI-to-PCIe_x4_Host_and_Endpoint

    Testing and Use Example

    The board is currently being used along with an OpenCAPI_Breakout board to test OpenCAPI-to-PCIe on the Innova-2 SmartNIC. PCIe 3.0 x4 at 8.0 GT/s is currently working. Standard 0.1″ M-F jumpers are used for the PCIe reset signal (nPERST) and its GND.

    With cables shorter than 4 in (~100 mm), the adapters work. Note that the RX U.FL-to-U.FL cables are all the same length as each other, and likewise all TX cables are the same length, but the RX and TX sets differ in length, as that is what I had access to. RX on the PCIe board connects to RX on the OpenCAPI board, as it uses the OpenCAPI Host pinout.

    PCIe x8 Breakout and OpenCAPI Breakout

    PCIe x4 In-system:

    PCIe x8 Breakout and OpenCAPI Breakout In System

    However, with 250 mm IPEX cables, the adapters fail at PCIe x8:

    PCIe x8 Breakout and OpenCAPI Breakout All Connections

    Adapters Close-up

    PCIe x8 In-system:

    OpenCAPI to PCIe x8 In-System

    PCB Layout

    PCIe x8 Breakout PCB Layout

    All signals are length-matched to within 1mm both inter-pair and intra-pair.

    Resistor footprint R1 connects PRSNT1 to PRSNT2_x8. The R1 trace can be cut and PRSNT1 can be connected to a different PRSNT2 to reduce the PCIe lane width.

    R1 Connects PRSNT1 to PRSNT2_x8

    Wire jumpers can then be used to connect PRSNT1 to any of the PRSNT2.

    PRSNT1 and PRSNT2 Jumpers

    Schematic

    PCIe x8 Breakout Schematic

    PCB Layer Stackup

    4-Layer PCB stackup taken from JLCPCB.

    PCB Layer Stackup

    Differential Impedance parameters were calculated using the DigiKey Online Calculator.

    PCB Differential Impedance Calculation

    Visit original content creator repository https://github.com/mwrnd/PCIe_x8_Breakout