Category: Blog

  • Streamlit-Selenium

    Streamlit Selenium Test

    Streamlit project to test Selenium running in Streamlit Cloud runtime.

    • Local Windows 10 machine works
    • Local Docker container works
    • Streamlit Cloud runtime works, see example app here: Docker

    Issues 🐛

    • The example fails on Streamlit Cloud with a TimeoutException caused by a 403 response: the target website uses GeoIP blocking. A proxy can therefore optionally be enabled to bypass this.
    • However, the proxies are not very reliable, because only free proxies are used here. With proxies enabled the example is therefore not very stable and can fail; sometimes no proxies are available at all.

    ToDo ☑️

    • improve the example
    • fix the proxy issues
    • also try the undetected_chromedriver package
    • also try the seleniumbase package

    Problem 🤔

    The suggestion for this repo came from a post on the Streamlit Community Forum.

    https://discuss.streamlit.io/t/issue-with-selenium-on-a-streamlit-app/11563

    It is not that easy to install and use a Selenium-based web scraper in container-based environments.
    On a local computer this usually works much more smoothly, because a browser is already installed and can be controlled by its associated webdriver.
    In container-based environments, however, headless operation is mandatory because no UI is available there.

    Therefore, in this repository a small example is given to get Selenium working on:

    • Local Windows 10 machine
    • Local Docker container that mimics the Streamlit Cloud runtime
    • Streamlit Community Cloud runtime

    Proxy 😎

    Because some websites block requests from certain countries (aka GeoIP blocking) or from certain IP ranges, a proxy can be used to bypass this. The example app has a checkbox to enable a proxy, and you can choose between socks4 and socks5 proxies. However, socks4 does not work at all, and the socks5 proxy is a free proxy from a public list and is not very reliable. With proxies enabled the example is therefore not very stable and can fail quite often.
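
    As a sketch of how the proxy reaches the browser: Chromium accepts a single --proxy-server switch (a real Chromium switch); the helper function below is hypothetical and only formats that switch.

```python
# Hypothetical helper: formats Chromium's --proxy-server switch for a
# socks4/socks5 proxy taken from a public proxy list.
def proxy_arg(scheme: str, host: str, port: int) -> str:
    return f"--proxy-server={scheme}://{host}:{port}"

# With Selenium installed, this would be added to the Chrome options, e.g.:
#   options.add_argument(proxy_arg("socks5", "1.2.3.4", 1080))
print(proxy_arg("socks5", "1.2.3.4", 1080))
# --proxy-server=socks5://1.2.3.4:1080
```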

    Pitfalls 🚩

    • To use Selenium (even headless in a container), two components must always be installed on your machine:
      • A web browser and its associated webdriver.
    • The versions of the headless web browser and its associated webdriver must always match.
    • If you are using Selenium in a Docker container or on Streamlit Cloud, the --headless option is mandatory, because no graphical user interface is available.
    • There are three options of webbrowser/webdriver combinations for Selenium:
      1. chrome & chromedriver
      2. chromium & chromedriver
      3. firefox & geckodriver
    • Unfortunately, not all of these packages are available in the default Debian Bullseye apt repositories. If we want an installation from the default repositories, only chromium & chromedriver remains.
    • The chromedriver has many options that can be set. It may be necessary to tweak these options on different platforms to make headless operation work.
    • The chromedriver, Selenium and their options change quite a lot over time, so much of the chromedriver/Selenium information on Stack Overflow is outdated.
    • Deployment to Streamlit Cloud has unfortunately failed sometimes in the past; no concrete cause or informative error message could be identified. Currently it seems to be stable on Streamlit Cloud.
    • To run this Streamlit app on Windows, the Windows chromedriver.exe must be stored in the root folder or added to the Windows PATH. Be aware that the version of this chromedriver must match the version of your installed Chrome browser.
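
    The pitfalls above can be sketched in code. The flags below are real Chromium switches commonly needed for headless container operation; the helper function itself is hypothetical:

```python
# Hypothetical helper collecting the Chromium switches that headless
# operation in a container typically needs.
def headless_chromium_args(window_size="1920,1080"):
    return [
        "--headless",               # mandatory: no GUI in the container
        "--no-sandbox",             # often required when running as root in Docker
        "--disable-dev-shm-usage",  # /dev/shm is small in many containers
        f"--window-size={window_size}",
    ]

# With Selenium installed, the flags would be applied roughly like this
# (sketch, not executed here):
#
#   from selenium import webdriver
#   options = webdriver.ChromeOptions()
#   for arg in headless_chromium_args():
#       options.add_argument(arg)
#   driver = webdriver.Chrome(options=options)
```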

    Development Setup 🛠️

    In the Streamlit Cloud runtime, neither chrome, chromedriver nor geckodriver are available in the default apt package sources.

    The Streamlit Cloud runtime seems to be very similar to the official docker image python:3.XX-slim-bullseye on Docker Hub, which is based on Debian Bullseye.

    In this repository a Dockerfile is provided that mimics the Streamlit Cloud runtime. It can be used for local testing.
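
    Such a Dockerfile might look roughly like this. This is a sketch only, assuming the packages.txt and requirements.txt contents below; the image tag and the app entry point name are assumptions, and the actual Dockerfile in the repository is authoritative:

```dockerfile
# Sketch: mimic the Streamlit Cloud runtime (Debian Bullseye based).
FROM python:3.11-slim-bullseye

# Same packages as packages.txt
RUN apt-get update && \
    apt-get install -y --no-install-recommends chromium chromium-driver && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8501
# entry point filename is an assumption
CMD ["streamlit", "run", "streamlit_app.py"]
```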

    A packages.txt is provided with the following minimal content:

    chromium
    chromium-driver

    A requirements.txt is provided with the following minimal content:

    streamlit
    selenium

    Docker 🐋

    Docker Container local

    The provided Dockerfile tries to mimic the Streamlit Cloud runtime.

    Build local custom Docker Image from Dockerfile

    docker build --progress=plain --tag selenium:latest .

    Run custom Docker Container

    docker run -ti -p 8501:8501 --rm selenium:latest
    docker run -ti -p 8501:8501 --rm selenium:latest /bin/bash
    docker run -ti -p 8501:8501 -v $(pwd):/app --rm selenium:latest  # linux
    docker run -ti -p 8501:8501 -v ${pwd}:/app --rm selenium:latest  # powershell
    docker run -ti -p 8501:8501 -v %cd%:/app --rm selenium:latest    # cmd.exe

    Selenium 👁️

    https://selenium-python.readthedocs.io/getting-started.html

    pip install selenium

    Chromium 🕸️

    Required packages to install

    apt install chromium
    apt install chromium-driver

    Chromium Options

    https://peter.sh/experiments/chromium-command-line-switches/

    undetected_chromedriver 🤷‍♂️

    Another option to try, not yet done…

    Status ✔️

    Last changed: 2024-06-13

    Visit original content creator repository

  • SwiftSummarize

    SwiftSummarize

    SwiftSummarize is the easiest way to create a summary from a String. Internally it’s a simple wrapper around the CoreServices SKSummary API.

    Before

    Here’s to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They’re not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can’t do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.

    After

    Because the people who are crazy enough to think they can change the world, are the ones who do

    Install

    Add this url to your dependencies:

    https://github.com/StefKors/SwiftSummarize
    

    Example

    let input = """
    Here's to the crazy ones. The misfits. The rebels. The troublemakers. The
    round pegs in the square holes. The ones who see things differently. They're not
    fond of rules. And they have no respect for the status quo. You can quote them,
    disagree with them, glorify or vilify them. About the only thing you can't do is ignore
    them. Because they change things. They push the human race forward. And while some
    may see them as the crazy ones, we see genius. Because the people who are crazy
    enough to think they can change the world, are the ones who do.  
    """
    
    let summary = Summary(input, numberOfSentences: 1)
    
    print(summary.output)
    // Because the people who are crazy enough to think they can change the world, are the ones who do

    Or use it directly on Strings with the extension

    let input = """
    Here's to the crazy ones. The misfits. The rebels. The troublemakers. The
    round pegs in the square holes. The ones who see things differently. They're not
    fond of rules. And they have no respect for the status quo. You can quote them,
    disagree with them, glorify or vilify them. About the only thing you can't do is ignore
    them. Because they change things. They push the human race forward. And while some
    may see them as the crazy ones, we see genius. Because the people who are crazy
    enough to think they can change the world, are the ones who do.  
    """
    
    let output = input.summarize(numberOfSentences: 1)
    
    print(output)
    // Because the people who are crazy enough to think they can change the world, are the ones who do

    A full SwiftUI code example can be found at /Example/ExampleSwiftUI.swift



  • MoGliFC-Flight-Controller

    MoGliFC – flight controller and generic CAN-FD interface

    powered by

    GLIWA.com

    WORK IN PROGRESS

    Tapeout expected: 09/2024

    Work based on:

    Overview

    MoGliFC is an STM32H743-based flight controller for quadcopters (‘drones’) as well as aircraft with wings. Winged aircraft typically require servos to be connected to the flight controller while copters typically have no such requirement.

    Quadcopter and Wing

    MoGliFC addresses both use-cases by providing a scored extension board for the servo connectors which also holds the BEC for the servos. When removed, the signal outputs remain on the flight controller and are available e.g. as GPIOs, UARTs or PWM outputs. To enable a “Full Stack” solution for drones incorporating also a dedicated ESC daughter board, the design goal of MoGliFC is such that off-the-shelf ESC boards can be used. (outlook: custom ESC board for MoGliFC with advanced features)

    CAN-FD and Debug

    The second extension option is a larger frame which allows the MoGliFC to be placed in a BOPLA ALUBOS housing with the USB interface as well as the CAN/CAN-FD externally available, turning the MoGliFC into a general-purpose CAN-FD interface. This second extension also holds a SWD (Serial Wire Debug) interface for easy debugging.

    PCB mechanical concept


    Features

    Main

    • MCU: STM32H743 @480MHz

    • GYROscope: BMI270 (Bosch)

    • MAGnetometer: HMC5883 (Honeywell)

    • BAROmeter: DPS310 (Infineon)

    • OnScreenDisplay: AT7456E

    • BLACKBOX Recorder: 128MB

    • expressLRS 3.x true diversity receiver (Gemini compatible)

    I/O

    • 7x UARTs
    • 8x PWM outputs (Wing only)
    • 4 ADCs (Vbat, Current, RSSI, AirSpeed)
    • I2C, SPI, CAN
    • USB-C

    Power

    • 8-25V (3~6S) DC-input
    • BEC 5V 2A cont., 3A peak (FC) (Wing/Debug only)
    • BEC 10V 2A cont., 3A peak (VTX/camera) (Wing/Debug only)
    • BEC Vx 8A cont., 10A peak (servos, 5V with 6/7.4V option) (Wing/Debug only)
    • LDO 3.3V 500mA (external)

    Power concept


    Other

    • 3x status LEDs
    • Beeper support
    • WS2812 LED support
    • Dual Camera Inputs switch
    • Current Sensor (50A continuous, 160A peak) (Wing/Debug only)
    • Battery Voltage Sensor: 1K:10K (INAV Scale 1100)

    Mechanics

    • Mounting: 30.5 x 30.5mm, Φ4mm with grommets Φ3mm
    • Dimension: 54 x 36 x 13 mm (wing); 36 x 36 x 13 mm (quad)
    • Weight: XXg (Quad: XXg, Wing: YYg)

    Documentation

    Setup, installation

    Hardware

    Software

    Tools

    Contributing

    Contributions are welcome and encouraged. You can contribute in many ways:

    • Documentation updates and corrections.
    • How-To guides – received help? help others!
    • Bug fixes.
    • New features.
    • Telling us your ideas and suggestions.
    • Buying your hardware from this

    The GitHub issue tracker is a good place to search for existing issues or to report a new bug/feature request. Before creating a new issue, please check whether an existing one already covers it!

    Developers

    Main developers are:

    Targets

    • set up github space
    • set up development environment
    • develop and prototype board
    • develop inav branch
    • final prototype testing
    • ingest MoGliFC as supported INAV and Betaflight target
    • series production
  • netlify-plugin-cypress

    netlify-plugin-cypress


    Runs Cypress end-to-end tests on Netlify Build

    Install and use

    You can install this plugin in the Netlify UI from this direct in-app installation link or from the Plugins directory.

    For file based installation, add netlify-plugin-cypress NPM package as a dev dependency to your repository.

    npm install --save-dev netlify-plugin-cypress
    # or
    yarn add -D netlify-plugin-cypress

    And then add the plugin’s name to the list of build plugins in netlify.toml file as shown in the examples below.

    note: this plugin assumes you have already installed Cypress as a dev NPM dependency.

    Chromium install

    This plugin installs the Chromium browser via Puppeteer; it is also cached inside the ./node_modules folder.

    How does it work

    Build steps

    When the Netlify Build system runs, it essentially performs two steps:

    1. builds the site
    2. deploys the site

    Every plugin that wants to perform some actions can do so before the build, after the build (but before the deploy), and after the deploy. Netlify uses the following names for these events:

    "preBuild"
    1. builds the site
    "postBuild"
    2. deploys the site
    "onSuccess"
    "onFailure"
    

    Thus every plugin can register itself to be executed before a site is built using “preBuild” event, or after a successful deploy using “onSuccess” event name, etc.

    This plugin

    By default, this plugin (netlify-plugin-cypress) runs during the “onSuccess” event, testing the deployed site. The Netlify Build system passes the deployed URL to the plugin, which runs Cypress against that URL using the Cypress NPM module API.

    Optionally, you can also run tests during “preBuild” and “postBuild” steps. This is useful if you want to ensure the site is working even before deploying it to Netlify servers. Finally, this plugin does not use “onFailure” event which happens only if Netlify fails to deploy the site.

    Failing the deploy

    Running Cypress tests by default uses the “onSuccess” step of the build pipeline. By this point Netlify has already deployed the site, so even if the tests fail now, Netlify still shows a successful deployment – the site is live! To really prevent broken deploys, we suggest using the Cypress GitHub / GitLab / Bitbucket integration to fail the status checks on a pull request.

    We also suggest running tests during the “preBuild” and/or “postBuild” steps. If the tests fail during these steps, Netlify fails the entire build and does not deploy the broken site.

    Finally, you can set up Slack notifications on failed tests against the deployed site. At the very least you will quickly find out if the deployed site fails the E2E tests and will be able to roll back the deploy.

    Examples

    basic

    Here is the most basic Netlify config file netlify.toml with just the Cypress plugin

    [[plugins]]
      # runs Cypress tests against the deployed URL
      package = "netlify-plugin-cypress"

    The example file above should be enough to run Cypress tests in any existing Netlify project.

    recommended

    We strongly recommend setting CYPRESS_CACHE_FOLDER to place the Cypress binary inside the node_modules folder to cache it between builds

    # explicit commands for building the site
    # and the folder to publish
    [build]
    command = "npm run build"
    publish = "build"
    
    [build.environment]
    # cache Cypress binary in local "node_modules" folder
    # so Netlify caches it
    CYPRESS_CACHE_FOLDER = "./node_modules/CypressBinary"
    # set TERM variable for terminal output
    TERM = "xterm"
    
    [[plugins]]
    # runs Cypress tests against the deployed URL
    package = "netlify-plugin-cypress"

    See netlify-plugin-cypress-example repo.

    TypeScript users may need to add an install step before the build command. For a yarn user with a TypeScript app, the build section of the Netlify configuration might look like this:

    [build]
    command = "yarn install && yarn build"
    publish = "build"
    
    # ...remaining configuration...

    tutorial

    Read the full tutorial at Test Sites Deployed To Netlify Using netlify-plugin-cypress.

    Note: if any tests against the deployed URL fail, Netlify still considers the build a success. Thus, if you want a test check to gate the deploy, install the Cypress GitHub App; the app will provide its own failing status check in this case.

    options

    You can control the browser, the specs to run, record tests on Cypress Dashboard, etc, see manifest.yml file.

    recording

    To record test results and artifacts on Cypress Dashboard, set record: true plugin input and set CYPRESS_RECORD_KEY as an environment variable via Netlify Deploy settings.

    [build]
    command = "npm run build"
    publish = "build"
      [build.environment]
      # cache Cypress binary in local "node_modules" folder
      # so Netlify caches it
      CYPRESS_CACHE_FOLDER = "./node_modules/CypressBinary"
      # set TERM variable for terminal output
      TERM = "xterm"
    
    [[plugins]]
    # runs Cypress tests against the deployed URL
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      record = true

    See cypress-example-kitchensink and recorded results at Cypress Dashboard netlify-plugin-cypress

    Security note 🔐: you should keep your CYPRESS_RECORD_KEY secret. You can control how Netlify builds external pull requests, see the doc – you never want to expose sensitive environment variables to outside builds.

    status checks

    If you are recording test results to Cypress Dashboard, you should also install Cypress GitHub Integration App to see status checks from individual groups or from individual specs per commit. See netlify-plugin-prebuild-example PR #8 pull request for an example.

    Netlify to Cypress Dashboard to GH Integration checks

    group

    You can change the group name for the recorded run using group parameter

    [[plugins]]
    # runs Cypress tests against the deployed URL
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      record = true
      group = "built site"

    tag

    You can give a recorded run tags using a comma-separated string. If no tag is specified, the Netlify context will be used (production, deploy-preview or branch-deploy).

    [[plugins]]
    # runs Cypress tests against the deployed URL
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      record = true
      group = "built site"
      tag = "nightly,production"

    spec

    Run only a single spec or specs matching a wildcard

    [build]
    command = "npm run build"
    publish = "build"
      [build.environment]
      # cache Cypress binary in local "node_modules" folder
      # so Netlify caches it
      CYPRESS_CACHE_FOLDER = "./node_modules/CypressBinary"
      # set TERM variable for terminal output
      TERM = "xterm"
    
    [[plugins]]
    # runs Cypress tests against the deployed URL
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      spec = "cypress/integration/smoke*.js"

    See cypress-example-kitchensink for instance.

    browser

    By default all tests run using the Chromium browser. If you want to use Electron:

    [build]
    command = "npm run build"
    publish = "build"
      [build.environment]
      # cache Cypress binary in local "node_modules" folder
      # so Netlify caches it
      CYPRESS_CACHE_FOLDER = "./node_modules/CypressBinary"
      # set TERM variable for terminal output
      TERM = "xterm"
    
    [[plugins]]
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      # allowed values: electron, chromium
      browser = "electron"

    configFile

    If you would like to use a different Cypress config file instead of cypress.json, specify it using the configFile option

    [build]
    command = "npm run build"
    publish = "build"
      [build.environment]
      # cache Cypress binary in local "node_modules" folder
      # so Netlify caches it
      CYPRESS_CACHE_FOLDER = "./node_modules/CypressBinary"
      # set TERM variable for terminal output
      TERM = "xterm"
    
    [[plugins]]
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      configFile = "cypress.netlify.config.js"

    testing SPA routes

    SPAs need a catch-all redirect to make non-root paths accessible to tests. You can enable this with the spa parameter.

    [[plugins]]
    # local Cypress plugin will test our site after it is built
    package = "netlify-plugin-cypress"
      [plugins.inputs]
      # can also use "spa = true" to use "index.html" by default
      spa = "index.html"
    

    See lws-spa for more options and tests/routing example.

    testing the site before build

    By default this plugin tests the static site after deploy. But maybe you want to run end-to-end tests against the local development server instead. You can start the local server, wait for it to respond, and then run Cypress tests by passing parameters to this plugin. Here is a sample config file:

    [[plugins]]
      package = "netlify-plugin-cypress"
      # let's run tests against development server
      # before building it (and testing the built site)
      [plugins.inputs.preBuild]
        enable = true
        start = 'npm start'
        wait-on = 'http://localhost:3000'
        wait-on-timeout = '30' # seconds

    Parameters you can place into preBuild inputs: start, wait-on, wait-on-timeout, spec, record, group, and tag.

    See netlify-plugin-prebuild-example repo

    testing the site after build

    By default this plugin tests the static site after deploy. But maybe you want to run end-to-end tests locally after building the static site. Cypress includes a local static server for this case, but you can specify your own command if needed using the start argument. Here is a sample config file:

    [[plugins]]
      package = "netlify-plugin-cypress"
      # let's run tests against the built site
      [plugins.inputs.postBuild]
        enable = true

    Parameters you can place into postBuild inputs: spec, record, group, tag, start and spa.

    The SPA parameter

    If your site requires all unknown URLs to redirect back to the index page, use the spa parameter

    [[plugins]]
      package = "netlify-plugin-cypress"
      # let's run tests against the built site
      [plugins.inputs.postBuild]
        enable = true
        # must allow our test server to redirect unknown routes to "/"
        # so that client-side routing can correctly route them
        # can be set to true or "index.html" (or similar fallback filename in the built folder)
        spa = true
        start = 'npm start'

    See the routing example.

    using Netlify CLI

    Even better, when testing the prebuilt site, run the Netlify CLI to make sure the local API redirects and Netlify Functions work in addition to the web site. Add netlify-cli as a dev dependency and start it during testing.

    $ npm i -D netlify-cli
    [[plugins]]
      package = "netlify-plugin-cypress"
      # start Netlify server
      [plugins.inputs.preBuild]
        start = 'npx netlify dev'
        wait-on = 'http://localhost:8888'

    For more, see tests/test-netlify-dev example and read Testing Netlify Function section.

    skipping tests

    If you are testing the site before building it and want to skip testing the deployed URL

    [[plugins]]
      package = "netlify-plugin-cypress"
      # do not test the deployed URL
      [plugins.inputs]
        enable = false
      # test the local site
      [plugins.inputs.preBuild]
        enable = true

    parallelization

    Running tests in parallel is not supported, because the Netlify plugin system runs on a single machine. Thus you can record the tests on the Cypress Dashboard, but not run them in parallel. If Netlify expands its build offering by allowing multiple build machines, we could take advantage of it and run tests in parallel.

    HTML files

    When serving the built folder, we automatically serve .html files. For example, if your folder has the following structure:

    public/
      index.html
      pages/
        about.html
    

    The public folder is served automatically and the following test successfully visits both the root and the about.html pages:

    cy.visit('/')
    cy.visit('/pages/about') // visits the about.html

    Example repos

    Name Description
    netlify-plugin-cypress-example Runs Cypress tests on Netlify and records their results to Cypress Dashboard
    netlify-plugin-prebuild-example Runs tests twice, first using the development version of the site, then after Netlify builds the production bundles, runs the tests again
    cypress-example-kitchensink Runs only a subset of all tests before publishing the folder to Netlify
    bahmutov/eleventyone Example used in Test Sites Deployed To Netlify Using netlify-plugin-cypress tutorial
    gatsby-starter-portfolio-cara A Gatsby site example

    Major upgrades

    v1 to v2

    • The default browser has been switched to Chromium. If you want to use the built-in Electron browser, set the browser option explicitly.
    • We have changed the default testing phase. In v1 the tests ran after building the site by default. In v2 the tests run against the deployed URL by default, and you need to explicitly enable testing during the preBuild or postBuild steps.

    Debugging

    Set environment variable DEBUG=netlify-plugin-cypress to see the debug logs. To see even more information, set DEBUG=netlify-plugin-cypress,netlify-plugin-cypress:verbose

    Warning: be careful with verbose logging, since it can print all environment variables passed to the plugin, including tokens, API keys, and other secrets.

    Common problems

    Too many progress messages while installing Cypress

    If you see a lot of progress messages during the “npm install” step, set the environment variable CI = 1 during the build to remove them.

    Cypress binary is installed on every build

    By default the Cypress binary is installed in the home folder, see caching. Netlify build does NOT cache this folder, but it DOES cache the local “node_modules” folder. Tell Cypress to install its binary in the “node_modules” folder by setting the build environment variable CYPRESS_CACHE_FOLDER = "./node_modules/CypressBinary".

    Several versions of Cypress are installed according to the build logs

    From the Netlify UI under Deploys, pick “Trigger Deploy” and select “Clear cache and deploy site”. This should cleanly install new “node_modules” and remove old Cypress versions.

    Term message warnings in the Cypress output

    If you see messages like tput: No value for $TERM and no -T specified during the Cypress run, add an environment variable TERM = xterm.

    Electron browser crashes while running tests

    Switch to the Chromium browser, which seems to be a bit more reliable: use the browser = "chromium" setting.

    You want to skip the Puppeteer download

    If you do not plan on using Chromium to run the tests (i.e. you want to use the built-in Electron browser), you can save time by skipping the Puppeteer download. Set the environment variable PUPPETEER_SKIP_DOWNLOAD = 1 on your CI.

    License

    This project is licensed under the terms of the MIT license.

    Contributing

    Read the contributing guide

  • homebrew-chunkwm

    homebrew-chunkwm

    Homebrew Tap for Chunkwm tiling window manager (https://github.com/koekeishiya/chunkwm)

    NOTE: plugins folder has been moved to /usr/local/opt/chunkwm/share/chunkwm/plugins

    Table of contents

    1. Supported OSX versions
    2. Brew Options
    3. Caveats
    4. Usage examples
    5. Plugins

    Supported OSX versions

    chunkwm works with macOS >= 10.11

    Brew options

    --with-logging
    	Redirect stdout and stderr to log files to standard brew path.
    --with-purify
        Build purify plugin.
    --with-tmp-logging
    	Redirect stdout and stderr to /tmp.
    --without-border
    	Do not build border plugin.
    --without-ffm
    	Do not build focus-follow-mouse plugin.
    --without-tiling
    	Do not build tiling plugin.
    --HEAD
    	Install HEAD version
    

    Caveats

    Copy the example configuration into your home directory:

    cp /usr/local/opt/chunkwm/share/chunkwm/examples/chunkwmrc ~/.chunkwmrc
    

    Opening chunkwm will prompt for Accessibility API permissions. After access
    has been granted, the application must be restarted:

    brew services restart chunkwm
    

    If both --with-logging and --with-tmp-logging are specified, the former takes
    precedence over the latter.

    Codesign chunkwm binary

    Accessibility API permissions must be granted again after every update to chunkwm, unless you
    codesign the binary with a self-signed certificate before restarting.

    Create code signing certificate named “chunkwm-cert” using Keychain Access.app:

    • Open Keychain Access.app
    • From menu select Keychain Access -> Certificate Assistant -> Create a certificate
    • Fill the certificate form:
      • Name: chunkwm-cert
      • Identity Type: Self Signed Root
      • Certificate Type: Code Signing

    Sign the binary:

    codesign -fs "chunkwm-cert" /usr/local/opt/chunkwm/bin/chunkwm
    

    To have launchd start crisidev/chunkwm/chunkwm now and restart at login:

    brew services start crisidev/chunkwm/chunkwm
    

    Or, if you don’t want/need a background service you can just run:

    chunkwm
    

    Logging

    If the formula has been built with --with-logging, logs will be found in

    /usr/local/var/log/chunkwm/chunkwm.[out|err].log
    

    Usage examples

    Clone tap

    brew tap crisidev/homebrew-chunkwm
    

    Install latest stable version

    brew install chunkwm
    

    Install from git repo

    brew install --HEAD chunkwm
    

    Do not install border, tiling, and ffm plugins

    brew install --without-border --without-tiling --without-ffm chunkwm
    

    Log chunkwm stdout and stderr on /tmp

    brew install --with-tmp-logging chunkwm
    

    Plugins

    This repo also distributes other chunkwm plugins.

    Blur wallpaper when there are open windows.

    brew install --HEAD chunkwm-blur
    


  • graph-tools

    graph-tools

    Tools for efficiently detecting subgraph isomorphisms and graph identity using Ullman and NAUTY algorithms in Common Lisp

    (require 'graph-tools)
    (in-package :graph-tools)
    ;;
    (let* ((graph (make-graph '((0 1) (0 2) (1 2) (0 3) (2 4))))
           ;; graph:
           ;;           1
           ;;          / \
           ;;         0 - 2
           ;;          \   \
           ;;           3   4
           (subgraph (make-graph '((0 1) (0 2) (1 2))))
           ;; subgraph:
           ;;           1
           ;;          / \
           ;;         0 - 2
           (isomorphisms (find-subgraph-isomorphisms subgraph graph)))
      (print isomorphisms))
    ;; 6 isomorphisms:
    #(
      #(0 1 2)
      #(0 2 1)
      #(1 2 0)
      #(1 0 2)
      #(2 0 1)
      #(2 1 0)
    )
    
    (let* ((g (make-graph '((0 1) (0 2) (1 2) (0 3) (2 4)))))
      (destructuring-bind (canonical-g reordering)
          (canonical-graph g)
        (print canonical-g)   ;; a graph with identical structure to g
        (print reordering)))  ;; array of length 5 mapping vertices in canonical-g to g

    Graph functions

    (make-graph '((0 1) (0 2) (1 2))) ;; an undirected triangular graph
    (make-graph '((0 1) (1 2) (2 0)) t) ;; a directed triangular graph
    
    (directedp g) ;; is graph directed or undirected?
    
    (graph-directed-edge-p g i1 i2) ;; t if an edge exists from i1 to i2
    (setf (graph-directed-edge-p g i1 i2) t) ;; connects i1 to i2
    
    (graph-vertex-count g) ;; returns # of vertices
    
    (vertex-paths g v1 v2 &key (max-steps -1) (ignore nil))
      ;; returns a list of paths, where each path is a list of vertices starting at v1 and ending at v2
    
    
    (coerce-to-undirected-graph g) ;; adds reverse edges to a directed graph, making it undirected
    
    
    (graph-add-edges g edge-specs)
    (graph-delete-edges g edge-specs) 
    
(graph-extend-verticies g n) ;; adds n unconnected vertices to the graph
    (graph-delete-verticies g vlist &key not)
    (graph-delete-verticies-if g pred &key not)
    
(graph-add-graph g1 g2 &key g1-g2-edges g2-g1-edges)
  ;; adds two graphs and new edges between the two graphs
  ;; e.g., (graph-add-graph g1 g2 :g1-g2-edges '((0 1))
  ;;                              :g2-g1-edges '((1 0)))
  ;;       creates an undirected edge between g1 vertex 0 and g2 vertex 1
    
    (matrix-of g) ;; gets the underlying bit matrix representing the graph
    
    (reorder-graph g #(2 3 0 1)) ;; returns a new graph with same structure
                            ;; but with vertices reordered
    
    (transpose g) ;; destructively transpose graph g - reverses directed edges
    
(unconnected-subgraphs g &key (verticies t) output-connected-p)
    ;; Returns a list of subgraphs of g which are disconnected from each other.
    ;; G is a graph.
    ;; OUTPUT-CONNECTED-P is a boolean which if set causes the function to consider
    ;; vertices to be connected when they share an output vertex.  This setting
    ;; only has meaning for directed graphs, since all edges in undirected graphs
    ;; are bidirectional.  If OUTPUT-CONNECTED-P is NIL, shared output vertices do
    ;; not cause their inputs to be in a connected group.
    
    (connected-vertex-groups g &key (verticies t) (max-steps -1) (ignore nil))
  ;; computes the sets of vertices which are connected (by output edges)
    
    

    Ullman subgraph isomorphism functions

Graphs are represented as bit matrices (vectors of integers treated as bitfields)

    find-subgraph-isomorphisms

    (find-subgraph-isomorphisms s g &key base-map continue-if vertex-test row-fn)

Returns a list of integer vectors, where each vector represents a subgraph isomorphism (mapping each subgraph vertex index to a graph vertex).

    Required arguments:

    s – subgraph to find
    g – graph to search

    Optional arguments:

base-map – allowed mappings from subgraph s vertices to graph g vertices. This parameter can be used to only allow matches between particular vertices. It is a vector where each index represents the corresponding vertex in the subgraph s, and each value is an integer bitfield where each set bit marks a graph g vertex that subgraph vertex may map to. An entry with all bits set means that all mappings are possible; an entry where all bits are 0 means that no mappings are possible between that subgraph vertex and any graph vertex.

continue-if – a function which takes an isomorphism as an argument and returns two booleans (continuep collectp). If collectp is true, the isomorphism is added to the list of returned isomorphisms. If continuep is true, the search continues for more isomorphisms.

vertex-test – predicate used to limit which vertices in s can match vertices in g. It takes arguments (s svertex g gvertex), where s and g are the subgraph and graph and svertex and gvertex are the vertices being tested, and returns NIL if svertex cannot map to gvertex.

row-fn – an alternative way of computing which mappings are possible. It takes arguments (s svertex g) and returns an integer bitfield representing the indices of g to which svertex may be mapped.

Note: if neither vertex-test nor row-fn is provided, a default vertex-test is used which only allows s vertices to map to g vertices with an equal or greater number of outgoing edges.
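As a rough illustration of what find-subgraph-isomorphisms computes, here is a minimal Python sketch: a backtracking search with the degree-based pruning of the default vertex-test. The library's Ullman implementation refines candidate bit-matrices instead and is much faster; this is only a model of the semantics:

```python
def adjacency(n, edges):
    """Undirected adjacency sets for n vertices."""
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def subgraph_isomorphisms(sn, s_edges, gn, g_edges):
    """All injective maps from subgraph vertices to graph vertices that
    preserve subgraph edges. Degree pruning mirrors the default
    vertex-test: a subgraph vertex may only map to a graph vertex of
    equal or greater degree."""
    s_adj, g_adj = adjacency(sn, s_edges), adjacency(gn, g_edges)
    results = []

    def extend(mapping):
        v = len(mapping)                      # next subgraph vertex to place
        if v == sn:
            results.append(tuple(mapping))
            return
        for w in range(gn):
            if w in mapping:
                continue                      # injective: w already used
            if len(g_adj[w]) < len(s_adj[v]):
                continue                      # degree pruning
            # every already-mapped neighbor of v must map to a neighbor of w
            if all(mapping[u] in g_adj[w] for u in s_adj[v] if u < v):
                extend(mapping + [w])

    extend([])
    return results

# Triangle into the 5-vertex graph from the example above: 6 matches.
isos = subgraph_isomorphisms(3, [(0, 1), (0, 2), (1, 2)],
                             5, [(0, 1), (0, 2), (1, 2), (0, 3), (2, 4)])
assert len(isos) == 6
```

Running it on the triangle-into-5-vertex-graph example above yields the same 6 isomorphisms as the Lisp example.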

    find-subgraph-isomorphism-maps

    (find-subgraph-isomorphism-maps s g &key base-map (continue-if t) vertex-test row-fn)

    Identical to find-subgraph-isomorphisms, but returns a list of bit-vectors instead of a list of integer vectors. Useful if you want to avoid the additional overhead of translating from bit-vectors to integer arrays.

    Visit original content creator repository

  • mikrotik-exporter

    mikrotik-exporter

    mikrotik-exporter is a Prometheus exporter written in Go with the goal to export all possible metrics from MikroTik devices.
The set of metrics collected is not predetermined; you can create your own modules.
    Some modules are shipped with the program, see here.

    Info – RouterOS v7

    The logic supports RouterOS v7, but not all modules have been adapted to the new command structure and parameter names.

    Probing

    Targets can be probed by requesting:

    http://localhost:9436/probe?target=xxx

    The modules defined at the target configuration can be overwritten via the query string:

    http://localhost:9436/probe?target=xxx&modules=interface,health

    For troubleshooting there are also two log levels available:

    http://localhost:9436/probe?target=xxx&debug=1
    http://localhost:9436/probe?target=xxx&trace=1

    Command line flags

    --config.file=config.yml
    --debug
    --trace
    

    Docker image

    The docker image is available on Docker Hub, Quay.io and GitHub.

    docker pull swoga/mikrotik-exporter
    docker pull quay.io/swoga/mikrotik-exporter
    docker pull ghcr.io/swoga/mikrotik-exporter
    

    You just need to map your config file into the container at /etc/mikrotik-exporter/config.yml

docker run -v $(pwd)/config.yml:/etc/mikrotik-exporter/config.yml swoga/mikrotik-exporter
    

    Configuration

    mikrotik-exporter can reload its configuration files at runtime via SIGHUP or by sending a request to /-/reload.

    main config

    [ listen: <string> | default = :9436 ]
    [ metrics_path: <string> | default = /metrics ]
    [ probe_path: <string> | default = /probe ]
    [ reload_path: <string> | default = /-/reload ]
    
    [ namespace: <string> | default = mikrotik ]
    [ username: <string> ]
    [ password: <string> ]
    
    config_files:
      [ - <string> ... | default = ./conf.d/* ]
    
    [ connection_cleanup_interval: <int> | default = 60 ]
    [ connection_use_timeout: <int> | default = 300 ]
    
    targets:
      [ - <target> ... ]
    modules:
      [ - <module> ... ]
    module_extensions:
      [ - <module_extension> ... ]

    conf.d

    targets:
      [ - <target> ... ]
    modules:
      [ - <module> ... ]
    module_extensions:
      [ - <module_extension> ... ]

    <target>

    name: <string>
    address: <string>
    [ username: <string> | default = main.username ]
    [ password: <string> | default = main.password ]
    [ timeout: <int> | default = 10 ]
    [ queue: <int> | default = 1000 ]
    variables:
      [ <string>: <string> ]
    modules:
      [ - <string> ... ]

    <module>

    name: <string>
    commands:
      [ - <command> ... ]

    <template_string>

Fields of this type support substitution with the values of parent variables.
Syntax: {name_of_variable}

    <command>

    command: <template_string>
    [ timeout: <int> | default = 10 ]
    [ prefix: <string> ]
    
    metrics:
      [ - <metric> ... ]
    labels:
      [ - <label/variable> ... ]
    variables:
      [ - <label/variable> ... ]
    
    sub_commands:
      [ - <command> ... ]

    <param> base for <metric> and <label/variable>

    # either param_name or value must be set
    [ param_name: <string> ]
    # static value for this param
    [ value: <template_string> ]
    # value used if not found in API response
    [ default: <template_string> ]
    # only relevant for param_type = datetime
    [ datetime_type: tonow / fromnow / timestamp | default = fromnow ]
    # only relevant for param_type = bool
    [ negate: <bool> ]
    
    # remapping is stopped after the first match in remap_values or remap_values_re
# remapping to null stops further processing of this parameter
    remap_values:
      [ <string>: <string> / null ]
    remap_values_re:
      [ <regex>: <string> / null ]

    <metric>

    # derives from param
    <param>
[ param_type: int / bool / timespan / datetime | default = int ]
    
    # either metric_name or param_name must be set
    [ metric_name: <string> | default = param_name ]
    metric_type: counter / gauge
    [ help: <string> ]
    
    labels:
      [ - <label/variable> ]

    <label/variable>

    # derives from param
    <param>
[ param_type: string / int / bool / timespan / datetime | default = int ]
    
    # either label_name or param_name must be set
    [ label_name: <string> | default = param_name ]

    <module_extension>

    module extensions are matched by name

    name: <string>
    commands: 
      [ - <command_extension> ... ]

    <command_extension>

    command extensions are matched by command

    command: <string>
    
    metrics:
      [ - <metric_extension> ... ]
    labels:
      [ - <label/variable_extension> ... ]
    variables:
      [ - <label/variable_extension> ... ]
    
    sub_commands:
      [ - <command_extension> ... ]

    <metric_extension>

    metric extensions are matched by metric_name

    # derives from metric
    <metric>
    
    extension_action: add / overwrite / remove

    <label/variable_extension>

    label/variable extensions are matched by label_name

    # derives from label/variable
    <label/variable>
    
    extension_action: add / overwrite / remove
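To make the schema above concrete, here is a hypothetical minimal configuration. The target address, credentials, command path and field names are illustrative placeholders, not shipped defaults:

```yaml
# Hypothetical example; adjust command paths and field names
# to your RouterOS version and needs.
targets:
  - name: router-1
    address: 192.168.88.1
    username: prometheus
    password: secret
    modules:
      - my_interface

modules:
  - name: my_interface
    commands:
      - command: /interface/print
        prefix: interface
        labels:
          - param_name: name
            param_type: string
        metrics:
          - param_name: running
            param_type: bool
            metric_type: gauge
            help: Whether the interface is running
```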

    Visit original content creator repository

  • kenyanroyals

    Kenyan Royals

    Everyone deserves a place to call home

Kenyan Royals is a charity platform with one goal: to get at least one person off the streets every day.
Kenyan Royals is not based on the idea of calling for exclusive fundraisers or looking to the well-off alone for help.
Kenyan Royals is looking for anyone with Ksh 10 to spare every day, every week, or whenever you can.
Simply put, the same amount you use to buy sweets can be used to get someone off the streets. Here’s how:
Wahenga walisema haba na haba hujaza kibaba (a Swahili proverb: “little by little fills the measure”). If 1,000 of us were to contribute Ksh 10, that would be Ksh 10,000 raised, the approximate amount Pato needs to start a mtumba shoes business.
Pato is just an example and the first person this project is going to focus on helping.
But there are many others out there with different stories and reasons why they sleep outside or go hungry.

We would all love a home to go to, and that’s what we aim to provide. But a home without a means to eat and pay for it is not enough,
so the aim of this project is to collect enough cash to get these people a house and a small business. This would enable them to stay standing after we help them up.
That is not the only concern: for those who are disabled, we get them a wheelchair or whatever else they need to work in our society.

These are essentials that we usually only expect our politicians to provide, and very few are lucky to get such a favour. We seek to help as many people as we can, and not by bankrupting ourselves.
All I am asking is for each and every one of us to contribute an amount that will never hurt your budget. Ksh 10 is just a recommendation; if you can do better, do it.
We do not seek to help just one person, so we don’t recommend putting all your money on one person; please let’s all contribute to help multiple individuals.

This is becoming a long story for a README, but if you’re interested, visit:
    https://kenyans.herokuapp.com

    Visit original content creator repository

  • rvc_remix

    rvcRemix

    Description

This program takes any audio/video file, extracts the audio, separates the vocal and instrumental tracks, applies an RVC model to the vocals and re-mixes them with the instrumental.

You’ll need at least one RVC model (find some here and extract the zip file), a folder to store your models in, and an input file, and you’re all set!

    Features

    • Generate from video/audio file (any format)
    • Generate from youtube link
• Pitch-shift the instrumental if the RVC voice is pitch-shifted too
• Fix pitch inconsistencies between vocals & instrumental when pitch-shifting
• Do not run separation if output files already exist.
• Automatically find the original pitch and fit the RVC model pitch (if possible)
    • Config file option to get instrumental and vocal audio file in same dir as original file
    • Edit audio separator models
    • Convert output file to same format as input (if audio, else use mp3)

    Requirements

Note: On Windows, preferably install these dependencies from the Microsoft Store

    • python 3.10
    • pip
• ffmpeg (on Windows, don’t forget to add the environment variable)
    • git
• Windows only: C++ 14 Desktop development tools here

    Installation

    • git clone the repo
    • cd rvcRemix
(optional) you can create a Python virtual environment so the project’s Python libraries don’t interfere with the ones already present on your system
    • run python -m venv venv
    linux
    • then source venv/bin/activate
    windows
• if PowerShell does not yet allow running scripts, run the following in an admin PowerShell window: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

    • then .\venv\Scripts\Activate.ps1 (if you’re using powershell) or venv\Scripts\activate (if you’re using cmd)

    • pip install -r utils/requirements.txt

    Configure

    • edit the file utils/config.json with a text editor and change the sections :
    • "modelsPath" : "a path to a folder" with the path to the folder you put your models in (preferably each model in its subfolder)
    • "workingDir" : "a path to a folder" with the path to the folder where the temporary files will be put
    • "mode" : "cpu" with the mode to use, “cpu” or “cuda”
    • "keepTempFiles" : false wether or not to keep intermediate temp files
    • "copySeparatedFiles" : true wether or not to copy separated file in the same directory as input; if existing, separation will not be done

    Running

if you created a virtual environment
    linux
    • run command source venv/bin/activate
    windows
    • run command .\venv\Scripts\Activate.ps1 in a powershell
    • to process a single file, run command :

    python run.py "path to the audio file" keyword_of_the_rvc_model pitch(optional)

    • to process multiple files, run command :

    python run.py --bulk "path_to_your_bulk_file.json" . File utils/bulk_remix.json provides an example.

    Utilities

Some useful resources:

    Compatibility

    Linux, Mac, Windows (as in python)

    Tested systems:
    ArchLinux

    Licensing

    WTFPL.

    This stuff is provided as is with no warranty at all, take your own precautions before using it.

    Visit original content creator repository

  • coredis

    coredis

    docs codecov Latest Version in PyPI ci Supported Python versions


    coredis is an async redis client with support for redis server, cluster & sentinel.

    • The client API uses the specifications in the Redis command documentation to define the API by using the following conventions:

      • Arguments retain naming from redis as much as possible
      • Only optional variadic arguments are mapped to variadic positional or keyword arguments. When the variable length arguments are not optional (which is almost always the case) the expected argument is an iterable of type Parameters or Mapping.
      • Pure tokens used as flags are mapped to boolean arguments
      • One of arguments accepting pure tokens are collapsed and accept a PureToken
    • Responses are mapped between RESP and python types as closely as possible.

• For higher level concepts such as Pipelines, Lua Scripts, PubSub & Streams, abstractions are provided to encapsulate recommended patterns. See the Handbook and the API Documentation for more details.


    Installation

    To install coredis:

    $ pip install coredis

    Feature Summary

    Deployment topologies

    Application patterns

    Server side scripting

    Redis Modules

    Miscellaneous

    Quick start

    Single Node or Cluster client

    import asyncio
    from coredis import Redis, RedisCluster
    
    async def example():
        client = Redis(host='127.0.0.1', port=6379, db=0)
        # or with redis cluster
    # client = RedisCluster(startup_nodes=[{"host": "127.0.0.1", "port": 7001}])
        await client.flushdb()
        await client.set('foo', 1)
        assert await client.exists(['foo']) == 1
        assert await client.incr('foo') == 2
        assert await client.incrby('foo', increment=100) == 102
        assert int(await client.get('foo')) == 102
    
        assert await client.expire('foo', 1)
        await asyncio.sleep(0.1)
        assert await client.ttl('foo') == 1
        assert await client.pttl('foo') < 1000
        await asyncio.sleep(1)
        assert not await client.exists(['foo'])
    
    asyncio.run(example())

    Sentinel

    import asyncio
    from coredis.sentinel import Sentinel
    
    async def example():
        sentinel = Sentinel(sentinels=[("localhost", 26379)])
        primary = sentinel.primary_for("myservice")
        replica = sentinel.replica_for("myservice")
    
        assert await primary.set("fubar", 1)
        assert int(await replica.get("fubar")) == 1
    
    asyncio.run(example())

    To see a full list of supported redis commands refer to the Command compatibility documentation

    Details about supported Redis modules and their commands can be found here

    Compatibility

coredis is tested against Redis versions >= 7.0. The test matrix status can be reviewed here

    coredis is additionally tested against:

    • uvloop >= 0.15.0

    Supported python versions

    • 3.10
    • 3.11
    • 3.12
    • 3.13
    • PyPy 3.10

    Redis API compatible databases

    coredis is known to work with the following databases that have redis protocol compatibility:

    References

    Visit original content creator repository