How Protobuf Works

A protobuf message contains many fields, each declared as: modifier field_type field_name = field_number;. During serialization, protobuf encodes each field in TLV form: T is the Tag, also called the Key; V is the field's value; the L (length) part is omitted for fixed-size wire types. The Value is stored as-is into the output string or file, while the Key is derived from the field number and the field's wire type, according to this formula:

(field_number << 3) | wire_type

Mapping of wire_type to field types:

| wire_type | meaning          | types                                                     |
| --------- | ---------------- | --------------------------------------------------------- |
| 0         | Varint           | int32, int64, uint32, uint64, sint32, sint64, bool, enum  |
| 1         | 64-bit           | fixed64, sfixed64, double                                 |
| 2         | Length-delimited | string, bytes, embedded messages, packed repeated fields  |
| 3         | Start group      | Groups (deprecated)                                       |
| 4         | End group        | Groups (deprecated)                                       |
| 5         | 32-bit           | fixed32, sfixed32, float                                  |

As you can see, each field in the message definition has a unique numbered tag. These tags are used to identify your fields in the message binary format, and should not be changed once your message type is in use. Note that tags with values in the range 1 through 15 take one byte to encode. Tags in the range 16 through 2047 take two bytes. So you should reserve the tags 1 through 15 for very frequently occurring message elements. Remember to leave some room for frequently occurring elements that might be added in the future. ...
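The key formula above can be checked with a short Python sketch. This is an illustration only, not the official protobuf library; the helper names `encode_varint` and `field_key` are made up here.

```python
def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def field_key(field_number: int, wire_type: int) -> bytes:
    """Key = (field_number << 3) | wire_type, then varint-encoded."""
    return encode_varint((field_number << 3) | wire_type)


# Field 1 with wire type 0 (varint) encodes to the single byte 0x08.
print(field_key(1, 0).hex())  # 08
# Field 2 with wire type 2 (length-delimited) encodes to 0x12.
print(field_key(2, 2).hex())  # 12
# Field numbers 16 and up need two key bytes, matching the quote above.
print(len(field_key(16, 0)))  # 2
```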

March 30, 2020 · 384 words · sdttttt


GitHub Actions

Uploading to Releases with GitHub Actions:

```yaml
name: release

# https://help.github.com/en/articles/workflow-syntax-for-github-actions#on
on:
  push:
    tags:
      - '*'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: "find env"
        run: |
          set | grep GITHUB_ | grep -v GITHUB_TOKEN
          zip -r pkg.zip *.md
      - uses: xresloader/upload-to-github-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          file: "*.md;*.zip"
          tags: true
          draft: false
          prerelease: true
          overwrite: true
          verbose: true
```

March 11, 2020 · 60 words

About Gcr

About ten days ago I started the GCR project, for a fairly simple reason: I am a command-line enthusiast, and I believe the command line brings better productivity and returns. In my daily coding I follow the Git commit conventions, using the git-cz tool on the Node.js platform to format my commit messages. But since git-cz belongs to the Node.js platform, you inevitably have to install the Node.js runtime. I wanted a more convenient, faster tool, so I started GCR. It is written in Rust, requires no runtime to be installed, is faster than the Node version, and keeps the cross-platform trait. I will also add some more personalized touches to GCR. It might turn out to be a nicer Git experience. The project will probably need a few more weeks, so stay tuned.

36 words

Appveyor

AppVeyor is another hosted CI/CD tool.

Supported contexts: Windows (default), Ubuntu, macOS

Supported languages: Node.js, io.js, Xamarin, Python, Ruby, C++, Go

Ruby

```yaml
version: 1.0.{build}-{branch}

skip_commits:
  files:
    - 'azure-pipelines.yml'
    - 'README.md'

install:
  - set PATH=C:\Ruby26-x64\bin;%PATH%
  - bundle install

build: off

before_test:
  - ruby -v
  - gem -v
  - bundle -v

test_script:
  - rails db:migrate RAILS_ENV=test
```

Appveyor.yml Reference

```yaml
# Notes:
#   - Minimal appveyor.yml file is an empty file. All sections are optional.
#   - Indent each level of configuration with 2 spaces. Do not use tabs!
#   - All section names are case-sensitive.
#   - Section names should be unique on each level.

#---------------------------------#
#      general configuration      #
#---------------------------------#

# version format
version: 1.0.{build}

# you can use {branch} name in version format too
# version: 1.0.{build}-{branch}

# branches to build
branches:
  # whitelist
  only:
    - master
    - production

  # blacklist
  except:
    - gh-pages

# Do not build on tags (GitHub, Bitbucket, GitLab, Gitea)
skip_tags: true

# Start builds on tags only (GitHub, BitBucket, GitLab, Gitea)
skip_non_tags: true

# Skipping commits with particular message or from specific user
skip_commits:
  message: /Created.*\.(png|jpg|jpeg|bmp|gif)/  # Regex for matching commit message
  author: John  # Commit author's username, name, email or regexp matching one of these.

# Including commits with particular message or from specific user
only_commits:
  message: /build/          # Start a new build if message contains 'build'
  author: jack@company.com  # Start a new build for commit of user with email jack@company.com

# Skipping commits affecting specific files (GitHub only). More details here: /docs/appveyor-yml
#skip_commits:
#  files:
#    - docs/*
#    - '**/*.html'

# Including commits affecting specific files (GitHub only). More details here: /docs/appveyor-yml
#only_commits:
#  files:
#    - Project-A/
#    - Project-B/

# Do not build feature branch with open Pull Requests
skip_branch_with_pr: true

# Maximum number of concurrent jobs for the project
max_jobs: 1

#---------------------------------#
#    environment configuration    #
#---------------------------------#

# Build worker image (VM template)
image: Visual Studio 2015

# scripts that are called at very beginning, before repo cloning
init:
  - git config --global core.autocrlf input

# clone directory
clone_folder: c:\projects\myproject

# fetch repository as zip archive
shallow_clone: true  # default is "false"

# set clone depth
clone_depth: 5  # clone entire repository history if not defined

# setting up etc\hosts file
hosts:
  queue-server: 127.0.0.1
  db.server.com: 127.0.0.2

# environment variables
environment:
  my_var1: value1
  my_var2: value2
  # this is how to set encrypted variable. Go to "Settings" -> "Encrypt YAML" page in account menu to encrypt data.
  my_secure_var1:
    secure: FW3tJ3fMncxvs58/ifSP7w==

# environment:
#   global:
#     connection_string: server=12;password=13;
#     service_url: https://127.0.0.1:8090
#
#   matrix:
#     - db: mysql
#       provider: mysql
#
#     - db: mssql
#       provider: mssql
#       password:
#         secure: $#(JFDA)jQ@#$

# this is how to allow failing jobs in the matrix
matrix:
  fast_finish: true  # set this flag to immediately finish build once one of the jobs fails.
  allow_failures:
    - platform: x86
      configuration: Debug
    - platform: x64
      configuration: Release

  # exclude configuration from the matrix. Works similarly to 'allow_failures' but build not even being started for excluded combination.
  exclude:
    - platform: x86
      configuration: Debug

# build cache to preserve files/folders between builds
cache:
  - packages -> **\packages.config  # preserve "packages" directory in the root of build folder but will reset it if packages.config is modified
  - projectA\libs
  - node_modules                    # local npm modules
  - '%LocalAppData%\NuGet\Cache'    # NuGet < v3
  - '%LocalAppData%\NuGet\v3-cache' # NuGet v3

# enable service required for build/tests
services:
  - mssql2014        # start SQL Server 2014 Express
  - mssql2014rs      # start SQL Server 2014 Express and Reporting Services
  - mssql2012sp1     # start SQL Server 2012 SP1 Express
  - mssql2012sp1rs   # start SQL Server 2012 SP1 Express and Reporting Services
  - mssql2008r2sp2   # start SQL Server 2008 R2 SP2 Express
  - mssql2008r2sp2rs # start SQL Server 2008 R2 SP2 Express and Reporting Services
  - mysql            # start MySQL 5.6 service
  - postgresql       # start PostgreSQL 9.5 service
  - iis              # start IIS
  - msmq             # start Queuing services
  - mongodb          # start MongoDB

# scripts that run after cloning repository
install:
  # by default, all script lines are interpreted as batch
  - echo This is batch
  # to run script as a PowerShell command prepend it with ps:
  - ps: Write-Host 'This is PowerShell'
  # batch commands start from cmd:
  - cmd: echo This is batch again
  - cmd: set MY_VAR=12345

# enable patching of AssemblyInfo.* files
assembly_info:
  patch: true
  file: AssemblyInfo.*
  assembly_version: "2.2.{build}"
  assembly_file_version: "{version}"
  assembly_informational_version: "{version}"

# Automatically register private account and/or project AppVeyor NuGet feeds.
nuget:
  account_feed: true
  project_feed: true
  disable_publish_on_pr: true # disable publishing of .nupkg artifacts to account/project feeds for pull request builds
  publish_wap_octopus: true   # disable publishing of Octopus Deploy .nupkg artifacts to account/project feeds

#---------------------------------#
#       build configuration       #
#---------------------------------#

# build platform, i.e. x86, x64, Any CPU. This setting is optional.
platform: Any CPU

# to add several platforms to build matrix:
#platform:
#  - x86
#  - Any CPU

# build Configuration, i.e. Debug, Release, etc.
configuration: Release

# to add several configurations to build matrix:
#configuration:
#  - Debug
#  - Release

# Build settings, not to be confused with "before_build" and "after_build".
# "project" is relative to the original build directory and not influenced by directory changes in "before_build".
build:
  parallel: true                 # enable MSBuild parallel builds
  project: MyTestAzureCS.sln     # path to Visual Studio solution or project
  publish_wap: true              # package Web Application Projects (WAP) for Web Deploy
  publish_wap_xcopy: true        # package Web Application Projects (WAP) for XCopy deployment
  publish_wap_beanstalk: true    # Package Web Applications for AWS Elastic Beanstalk deployment
  publish_wap_octopus: true      # Package Web Applications for Octopus deployment
  publish_azure_webjob: true     # Package Azure WebJobs for Zip Push deployment
  publish_azure: true            # package Azure Cloud Service projects and push to artifacts
  publish_aspnet_core: true      # Package ASP.NET Core projects
  publish_core_console: true     # Package .NET Core console projects
  publish_nuget: true            # package projects with .nuspec files and push to artifacts
  publish_nuget_symbols: true    # generate and publish NuGet symbol packages
  include_nuget_references: true # add -IncludeReferencedProjects option while packaging NuGet artifacts

# MSBuild verbosity level
verbosity: quiet|minimal|normal|detailed

# scripts to run before build
before_build:

# to run your custom scripts instead of automatic MSBuild
build_script:

# scripts to run after build (working directory and environment changes are persisted from the previous steps)
after_build:

# scripts to run *after* solution is built and *before* automatic packaging occurs (web apps, NuGet packages, Azure Cloud Services)
before_package:

# to disable automatic builds
#build: off

#---------------------------------#
#       tests configuration       #
#---------------------------------#

# to run tests against only selected assemblies and/or categories
test:
  assemblies:
    only:
      - asm1.dll
      - asm2.dll

  categories:
    only:
      - UI
      - E2E

# to run tests against all except selected assemblies and/or categories
#test:
#  assemblies:
#    except:
#      - asm1.dll
#      - asm2.dll
#
#  categories:
#    except:
#      - UI
#      - E2E

# to run tests from different categories as separate jobs in parallel
#test:
#  categories:
#    - A          # A category common for all jobs
#    - [UI]       # 1st job
#    - [DAL, BL]  # 2nd job

# scripts to run before tests (working directory and environment changes are persisted from the previous steps such as "before_build")
before_test:
  - echo script1
  - ps: Write-Host "script1"

# to run your custom scripts instead of automatic tests
test_script:
  - echo This is my custom test script

# scripts to run after tests
after_test:

# to disable automatic tests
#test: off

#---------------------------------#
#     artifacts configuration     #
#---------------------------------#

artifacts:
  # pushing a single file
  - path: test.zip

  # pushing a single file with environment variable in path and "Deployment name" specified
  - path: MyProject\bin\$(configuration)
    name: myapp

  # pushing entire folder as a zip archive
  - path: logs

  # pushing all *.nupkg files in build directory recursively
  - path: '**\*.nupkg'

#---------------------------------#
#     deployment configuration    #
#---------------------------------#

# providers: Local, FTP, WebDeploy, AzureCS, AzureBlob, S3, NuGet, Environment
# provider names are case-sensitive!
deploy:
  # FTP deployment provider settings
  - provider: FTP
    protocol: ftp|ftps|sftp
    host: ftp.myserver.com
    username: admin
    password:
      secure: eYKZKFkkEvFYWX6NfjZIVw==
    folder:
    application:
    active_mode: false
    beta: true  # enable alternative FTP library for 'ftp' and 'ftps' modes
    debug: true # show complete FTP log

  # Amazon S3 deployment provider settings
  - provider: S3
    access_key_id:
      secure: ABcd==
    secret_access_key:
      secure: ABcd==
    bucket: my_bucket
    folder:
    artifact:
    set_public: false

  # Azure Blob storage deployment provider settings
  - provider: AzureBlob
    storage_account_name:
      secure: ABcd==
    storage_access_key:
      secure: ABcd==
    container: my_container
    folder:
    artifact:

  # Web Deploy deployment provider settings
  - provider: WebDeploy
    server: http://www.deploy.com/myendpoint
    website: mywebsite
    username: user
    password:
      secure: eYKZKFkkEvFYWX6NfjZIVw==
    ntlm: false
    remove_files: false
    app_offline: false
    do_not_use_checksum: true       # do not use check sum for comparing source and destination files. By default checksums are used.
    sync_retry_attempts: 2          # sync attempts, max
    sync_retry_interval: 2000       # timeout between sync attempts, milliseconds
    aspnet_core: true               # artifact zip contains ASP.NET Core application
    aspnet_core_force_restart: true # poke app's web.config before deploy to force application restart
    skip_dirs: \\App_Data
    skip_files: web.config
    on:
      branch: release
      platform: x86
      configuration: debug

  # Deploying to Azure Cloud Service
  - provider: AzureCS
    subscription_id:
      secure: fjZIVw==
    subscription_certificate:
      secure: eYKZKFkkEv...FYWX6NfjZIVw==
    storage_account_name: my_storage
    storage_access_key:
      secure: ABcd==
    service: my_service
    slot: Production
    target_profile: Cloud
    artifact: MyPackage.cspkg

  # Deploying to NuGet feed
  - provider: NuGet
    server: https://my.nuget.server/feed
    api_key:
      secure: FYWX6NfjZIVw==
    skip_symbols: false
    symbol_server: https://your.symbol.server/feed
    artifact: MyPackage.nupkg

  # Deploy to GitHub Releases
  - provider: GitHub
    artifact: /.*\.nupkg/     # upload all NuGet packages to release assets
    draft: false
    prerelease: false
    on:
      branch: master          # release from master branch only
      APPVEYOR_REPO_TAG: true # deploy on tag push only

  # Deploying to a named environment
  - provider: Environment
    name: staging
    on:
      branch: staging
      env_var1: value1
      env_var2: value2

# scripts to run before deployment
before_deploy:

# scripts to run after deployment
after_deploy:

# to run your custom scripts instead of provider deployments
deploy_script:

# to disable deployment
#deploy: off

#---------------------------------#
#        global handlers          #
#---------------------------------#

# on successful build
on_success:
  - do something

# on build failure
on_failure:
  - do something

# after build failure or success
on_finish:
  - do something

#---------------------------------#
#         notifications           #
#---------------------------------#

notifications:
  # Email
  - provider: Email
    to:
      - user1@email.com
      - user2@email.com
    subject: 'Build {{status}}'               # optional
    message: "{{message}}, {{commitId}}, ..." # optional
    on_build_status_changed: true

  # HipChat
  - provider: HipChat
    auth_token:
      secure: RbOnSMSFKYzxzFRrxM1+XA==
    room: ProjectA
    template: "{message}, {commitId}, ..."

  # Slack
  - provider: Slack
    incoming_webhook: http://incoming-webhook-url

  # ...or using auth token
  - provider: Slack
    auth_token:
      secure: kBl9BlxvRMr9liHmnBs14A==
    channel: development
    template: "{message}, {commitId}, ..."

  # Campfire
  - provider: Campfire
    account: appveyor
    auth_token:
      secure: RifLRG8Vfyol+sNhj9u2JA==
    room: ProjectA
    template: "{message}, {commitId}, ..."

  # Webhook
  - provider: Webhook
    url: http://www.myhook2.com
    headers:
      User-Agent: myapp 1.0
      Authorization:
        secure: GhD+5xhLz/tkYY6AO3fcfQ==
    on_build_success: false
    on_build_failure: true
    on_build_status_changed: true
```

1738 words

Azure Pipelines

Azure Pipelines is a cloud service you can use to automatically build and test your code project and make it available to other users. It works with just about any language or project type. Azure Pipelines combines continuous integration (CI) and continuous delivery (CD) to constantly test and build your code and ship it to any target. It supports a great many languages.

Price

Azure Pipelines is free for public projects. If you use private projects, you can run up to 1,800 minutes (30 hours) of pipeline jobs per month for free. Learn more about the parallel-job-based pricing. Isn't that great o(////▽////)q

Follow these basic steps:

1. Configure Azure Pipelines to use your Git repository.
2. Edit the azure-pipelines.yml file to define your build.
3. Push your code to your version-control repository. This action kicks off the default trigger to build and deploy; then monitor the results.

Ruby

```yaml
# Ruby
# Package your Ruby project.
# Add steps that install rails, analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/ruby

trigger:
  branches:
    # Only commits to the following branches trigger CI/CD
    include:
      - master
      - sdtttttt
      - CICD
      - depend*
  paths:
    # Commits touching only the following files do not trigger CI/CD
    exclude:
      - README.md
      - appveyor.yml

pool:
  vmImage: 'ubuntu-18.04'

steps:
  - task: UseRubyVersion@0
    inputs:
      # Damn it, the Ubuntu image Microsoft provides no longer ships Ruby 2.6.3
      versionSpec: '>= 2.6.3'

  # Rails' built-in SQLite3 database depends on the following packages
  - script: sudo apt-get -yqq install libsqlite3-dev libpq-dev
    displayName: install sqlite3

  - script: |
      gem install bundler
      bundle install --retry=3 --jobs=4
    displayName: 'bundle install'

  - script: bundle exec rake
    displayName: 'bundle exec rake'
```

115 words

Pitfalls When Compiling ImmortalWrt

This post will be updated frequently. I mainly run DAE on my router to handle network traffic, so some BPF-related options must be enabled when building the system. Below are some of the errors I have hit.

ERROR: package/kernel/bpf-headers failed to build.

The key message this one finally throws is:

/workdir/openwrt/include/bpf.mk:71: *** ERROR: LLVM/clang version too old. Minimum required: 12, found: . Stop.

Installing LLVM/clang 12 or newer fixes it:

```shell
sudo sh -c 'echo "deb http://apt.llvm.org/focal/ llvm-toolchain-focal-12 main" >> /etc/apt/sources.list'
sudo sh -c 'echo "deb-src http://apt.llvm.org/focal/ llvm-toolchain-focal-12 main" >> /etc/apt/sources.list'
wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
sudo apt update -y
sudo apt full-upgrade -y
sudo apt install -y clang-12 llvm-12
```

68 words · sdttttt

New PC

I recently picked up a used 2016 MBP. The specs are i7-6700 | 16G | Retina display | 512G, for 7,000 CNY. I bought it on a Thursday and it arrived the following Monday. Overall the machine is in better shape than I expected. The case has some signs of use, which is normal; the screen and keyboard are intact, the hardware has no defects, and all the drivers work. The battery has only 1 charge cycle, so it is essentially a like-new machine. I really damn well scored a bargain. Even though this thing is a 2016 machine, it still runs very smoothly on macOS 10.15.7. I will skip upgrading to the latest Big Sur; I suspect it would be optimized backwards. The only real regret: is MBP heat always this bad? After watching video in a browser for a while, the strip above the keyboard starts to burn; it feels like 60 °C to the touch. Is this cooling really okay? Finally, some thoughts. This is my first Apple product, and the design of this laptop, hardware and software alike, can be summed up in one word: sexy.

36 words

How the OSI 7 Layers Are Implemented in the OS, and How the OS Handles Packets

This post is just a note, mostly excerpted from around the web, but it should be quite useful later when tinkering with soft routers and related protocol development.

1. Hardware layer: receiving and converting physical signals

External traffic (e.g. Ethernet frames) travels over the physical medium (copper/fiber) and finally reaches the server's network interface card (NIC). The NIC's core job is to convert physical signals (electrical/optical) into digital signals and to perform initial framing and verification.

Key hardware steps:

- Signal reception and digitization: the NIC's PHY (physical-layer chip) converts the optical/electrical signal into a binary digital signal (a bit stream), and the MAC (media access control) sublayer assembles it into an Ethernet frame (MAC header, IP header, transport header, payload, CRC).
- Hardware checks and filtering:
  - CRC check: the NIC hardware automatically verifies the CRC at the tail of the Ethernet frame and drops the frame on failure (keeping invalid data out of the system).
  - MAC filtering: the NIC checks whether the destination MAC is its own (unicast), the broadcast address (FF:FF:FF:FF:FF:FF), or a multicast address (if multicast reception is enabled). Frames for other MACs are dropped (unless the NIC is in promiscuous mode).
- DMA into memory: if the frame is valid, the NIC writes it via DMA (direct memory access) straight into a pre-allocated kernel receive buffer (the RX Ring Buffer), sparing the CPU from copying every packet and improving efficiency.

2. Interrupts and softirqs: notifying the kernel

After DMA completes, the NIC raises a hardware interrupt to tell the CPU "new data has arrived!". To keep hard interrupts from constantly preempting the CPU (hurting performance), modern Linux uses the NAPI (New API) mechanism to optimize interrupt handling.

Key interrupt-handling steps:

- Hard interrupt: the NIC's IRQ (interrupt request) line signals the CPU, which suspends its current task and jumps to the interrupt service routine (ISR) registered by the NIC driver.
- Fast ISR: the ISR mainly confirms that the data has been written to memory via DMA and raises a softirq (deferred work handled by the ksoftirqd kernel threads), then hands control back. Note: modern NIC drivers generally enable NAPI, so the ISR merely marks "data pending" and the softirq processes packets in batches, reducing the number of hard interrupts.
- Softirq processing: the kernel's softirq subsystem (net_rx_action) is triggered, reads packets out of the RX Ring Buffer, and hands them to the network driver for further processing.

3. Driver layer: initial parsing and hand-off

The NIC driver (e.g. e1000e, ixgbe) converts the raw hardware-level data into the kernel's sk_buff (socket buffer) structure and passes it to the upper protocol stack.

Key driver steps:

- Validation and integrity checks: the driver verifies Ethernet frame integrity (the CRC was already checked in hardware, so this may be skipped), strips the link-layer header (MAC header), and extracts the IP datagram (for IPv4/IPv6).
- Hand-off to the protocol stack: the driver passes the assembled sk_buff to the kernel network subsystem's entry function (e.g. netif_rx()) to enter the protocol-stack processing path.

4. Kernel protocol stack: layered processing and routing decisions

The kernel network stack processes the packet layer by layer along the OSI model, link layer → network layer → transport layer, until it finally reaches the application layer. Along the way, modules such as netfilter step in to control or modify traffic.

4.1 Link layer

Function: handles the Ethernet frame header (source/destination MAC, type field) and determines the upper-layer protocol (e.g. the type field is 0x0800 for IPv4).

Key operations:

- If the frame type is ARP (Address Resolution Protocol), it goes to the ARP module (which resolves IP-to-MAC mappings).
- For IP (IPv4/IPv6), the link-layer header is stripped and the IP datagram is passed to the network layer.

4.2 Network layer (IP)

The IP stack uses the destination IP in the header to decide whether the packet is received locally or forwarded (if routing is enabled on this host).

Key steps:

- Checksum: the IP header checksum is verified (to catch data corrupted in transit); failures are dropped.
- Options: optional IP header fields (record route, timestamps, etc.) are processed, usually ignored by default.
- Routing decision:
  - Local delivery: if the destination IP is an address configured on this host (or a broadcast/multicast address), the packet goes on to the transport layer.
  - Forwarding: if this host is a router and the destination IP is not on the local network, the packet is forwarded out another interface per the routing table (requires ip_forward).

Netfilter hook: the PREROUTING chain. After the IP datagram enters the network layer and before the routing decision (or after it, depending on the packet's direction), the netfilter framework's PREROUTING chain fires. Common operations include:

- DNAT (destination NAT): rewrite the destination IP/port (e.g. map a public IP to an internal server).
- Filter: drop or accept packets via iptables/nftables rules (e.g. block access from a specific IP).
- Mark: tag packets (e.g. with the MARK target) for later modules (such as tc) to use.

4.3 Transport layer

Based on the protocol field in the IP header (TCP=6, UDP=17), the packet is handed to the matching transport-protocol module (e.g. TCP calls tcp_v4_rcv()).

TCP example:

- Checksum: the TCP checksum (computed over a pseudo IP header + TCP header + data) is verified; failures are dropped.
- Port demultiplexing: the destination port in the TCP header selects the corresponding socket (the listening port the application registered via socket).
- Connection-state handling: for an established connection (ESTABLISHED) the data is queued into the socket's receive buffer (an sk_buff queue); a new connection (SYN) triggers the three-way handshake.

UDP example:

- Connectionless: the socket is looked up directly by destination port and the packet is placed in its receive buffer (no connection setup).

5. Traffic control (tc)

tc (Traffic Control) is Linux's traffic-shaping tool; through qdiscs (queueing disciplines), classes, and filters it schedules, rate-limits, and re-prioritizes traffic. It usually hooks in between the link layer and the network layer (before enqueue) or between the transport layer and the stack (after dequeue), depending on configuration. ...
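To make the link-layer and IP-layer steps above concrete, here is a small Python sketch (standard library only; field offsets follow the Ethernet II and IPv4 header formats, and `parse_frame`/`ip_checksum` are names made up for this illustration, not kernel code) that strips the MAC header, checks the EtherType, and verifies the IPv4 header checksum the way section 4.2 describes:

```python
import struct


def ip_checksum(header: bytes) -> int:
    """RFC 1071 ones'-complement sum over the 16-bit words of an IP header.
    A header with a valid checksum field sums to zero."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                 # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF


def parse_frame(frame: bytes) -> dict:
    """Strip the Ethernet header, then parse the IPv4 header, mirroring the
    link layer -> network layer hand-off described above."""
    dst_mac, src_mac = frame[:6], frame[6:12]      # link-layer addresses
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                        # 0x0800 = IPv4
        raise ValueError("not an IPv4 frame")
    ip = frame[14:]
    ihl = (ip[0] & 0x0F) * 4                       # header length in bytes
    if ip_checksum(ip[:ihl]) != 0:                 # corrupted header -> drop
        raise ValueError("bad IP header checksum")
    return {
        "proto": ip[9],                            # 6 = TCP, 17 = UDP
        "src": ".".join(map(str, ip[12:16])),
        "dst": ".".join(map(str, ip[16:20])),
    }
```

The returned `proto` field is exactly what section 4.3 uses to demultiplex to TCP or UDP.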

119 words · sdttttt

Rails Development

Webpacker

Starting with version 6, Rails depends on Webpacker; you must install it before running the app:

```shell
rails webpacker:install
```

If you need a front-end framework, install it with yarn, so that deployment benefits from webpacker bundling.

production

Rails 6 needs a key as an encryption salt at startup, and the key cannot be generated arbitrarily. To generate the key, first delete credentials.yml.enc and master.key under config, then run:

```shell
rails credentials:edit
```

For Rails to serve static assets, they must be compiled and bundled by webpacker:

```shell
rails assets:precompile
```

In production, Rails 6 assumes you use Apache or Nginx to serve the compiled static assets. If you do not use them, you need:

```ruby
# config/environments/production.rb
config.public_file_server.enabled = true
```

Remember: after bundling, the js and css are collectively named application.js/css. Reference the name application in your views; anything else will raise an error.

65 words