BiFormer Experiment Notes


The code comes from the address given in the article.

Contents

I. Forward Pass

1. Patch Embedding

2. BiFormer Block

BRA Module

Network Structure


I. Forward Pass

1. Patch Embedding

See the Network Structure section; the stem performs 4x downsampling.
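
For reference, the stage-0 patch embedding can be read directly off the structure dump at the end of this post: two stride-2 3x3 convolutions with BatchNorm and a GELU in between, which together give the 4x downsampling. A minimal standalone sketch:

```python
import torch
import torch.nn as nn

# Stage-0 patch embedding as printed in the structure dump below:
# two stride-2 3x3 convs -> 4x spatial downsampling, 3 -> 64 channels.
patch_embed = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(32),
    nn.GELU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(64),
)

x = torch.randn(1, 3, 224, 224)
print(patch_embed(x).shape)  # torch.Size([1, 64, 56, 56])
```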

2. BiFormer Block

The 3x3 depthwise-convolution position embedding at the start of the block corresponds to

```python
x = x + self.pos_embed(x)
```

and the BRA attention branch (LayerNorm, attention, DropPath, and residual connection) corresponds to

```python
x = x + self.drop_path(self.attn(self.norm1(x)))
```
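
Putting the residual branches together, the block forward is roughly the following (a simplified sketch; it assumes the block permutes between NCHW for the depthwise convolution and NHWC for the LayerNorm/attention, which matches the shapes traced below):

```python
def block_forward(self, x):
    # x: (N, C, H, W)
    x = x + self.pos_embed(x)                          # depthwise 3x3 conv as position embedding
    x = x.permute(0, 2, 3, 1)                          # NCHW -> NHWC for LayerNorm / BRA
    x = x + self.drop_path(self.attn(self.norm1(x)))   # Bi-level Routing Attention branch
    x = x + self.drop_path(self.mlp(self.norm2(x)))    # MLP branch
    return x.permute(0, 3, 1, 2)                       # back to NCHW
```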

The details of this attention module are recorded carefully below.

BRA Module

The BRA module requires a precondition to hold before it runs:

```python
        else:  # True
            N, H, W, C = x.size()  # 1,56,56,64
            assert H%self.n_win == 0 and W%self.n_win == 0
```

Here self.n_win is the S from the paper. The patchify step in Algorithm 1 of the paper,

```python
# patchify input (H, W, C) -> (Sˆ2, HW/Sˆ2, C)
x = patchify(input, patch_size=H//S)
```

corresponds to this code:

```python
# patchify, (n, p^2, w, w, c), keep 2d window as we need 2d pooling to reduce kv size
x = rearrange(x, "n (j h) (i w) c -> n (j i) h w c", j=self.n_win, i=self.n_win)  # (1,49,8,8,64)
```
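
A quick toy check of what this rearrange does, assuming n_win = 7 and the (1, 56, 56, 64) input traced above:

```python
import torch
from einops import rearrange

n_win = 7                       # S in the paper
x = torch.randn(1, 56, 56, 64)  # NHWC feature map entering the BRA module
win = rearrange(x, "n (j h) (i w) c -> n (j i) h w c", j=n_win, i=n_win)
print(win.shape)                # torch.Size([1, 49, 8, 8, 64]): 49 regions of 8x8 tokens
```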

Equation 3 in the paper, together with these lines of Algorithm 1,

```python
# linear projection of query, key, value
query, key, value = linear_qkv(x).chunk(3, dim=-1)
```

correspond to:

```python
q, kv = self.qkv(x)  # to 103  q (1,49,8,8,64) kv (1,49,8,8,128)

# pixel-wise qkv
# q_pix: (n, p^2, w^2, c_qk)
# kv_pix: (n, p^2, h_kv*w_kv, c_qk+c_v)
q_pix = rearrange(q, 'n p2 h w c -> n p2 (h w) c')  # (1,49,64,64)
kv_pix = self.kv_down(rearrange(kv, 'n p2 h w c -> (n p2) c h w'))  # (49,128,8,8)
kv_pix = rearrange(kv_pix, '(n j i) c h w -> n (j i) (h w) c', j=self.n_win, i=self.n_win)  # (1,49,64,128)
```

The only difference is that k and v are kept fused together here as a single kv tensor. A sketch of this projection is given below.
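
The sketch is consistent with the Linear(in_features=64, out_features=192) printed in the structure dump (qk_dim = dim = 64 in stage 1); the class name and split sizes follow the traced shapes, not necessarily the repo's exact code:

```python
import torch.nn as nn

class QKVLinearSketch(nn.Module):
    """Produce q (qk_dim channels) and a fused kv (qk_dim + dim channels) with one Linear."""
    def __init__(self, dim, qk_dim):
        super().__init__()
        self.qk_dim = qk_dim
        self.dim = dim
        self.qkv = nn.Linear(dim, qk_dim + qk_dim + dim)  # 64 -> 192 in stage 1

    def forward(self, x):
        # x: (n, p^2, h, w, c); split the last dim into q and the fused kv
        q, kv = self.qkv(x).split([self.qk_dim, self.qk_dim + self.dim], dim=-1)
        return q, kv
```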

The region-level (window-wise) query and key in the paper correspond to:

```python
q_win, k_win = q.mean([2, 3]), kv[..., 0:self.qk_dim].mean([2, 3])  # window-wise qk, (n, p^2, c_qk), (n, p^2, c_qk); q_win (1,49,64), k_win (1,49,64); this is k_win, so only channels 0:self.qk_dim are taken
```

Note that this is not an average over channels: it averages the 8x8 token vectors inside each region.
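
A one-liner check of that pooling:

```python
import torch

q = torch.randn(1, 49, 8, 8, 64)
q_win = q.mean(dim=[2, 3])   # average the 8x8 token vectors inside each region
print(q_win.shape)           # torch.Size([1, 49, 64]): one query vector per region
```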

The code then executes:

```python
lepe = self.lepe(rearrange(kv[..., self.qk_dim:], 'n (j i) h w c -> n c (j h) (i w)', j=self.n_win, i=self.n_win).contiguous())  # (1,64,56,56)
lepe = rearrange(lepe, 'n c (j h) (i w) -> n (j h) (i w) c', j=self.n_win, i=self.n_win)  # (1,56,56,64)
```

This corresponds to LCE(V) in Equation 7 of the paper.

Equations 4 and 5 in the paper are computed, respectively, by

```python
r_weight, r_idx = self.router(q_win, k_win)  # both are (n, p^2, topk) tensors to 51 (1,49,1) (1,49,1)
```

whose internal forward pass is:

```python
    def forward(self, query:Tensor, key:Tensor)->Tuple[Tensor]:  # q (1,49,64), k (1,49,64)
        """
        Args:
            q, k: (n, p^2, c) tensor
        Return:
            r_weight, topk_index: (n, p^2, topk) tensor
        """
        if not self.diff_routing:  # True
            query, key = query.detach(), key.detach()
        query_hat, key_hat = self.emb(query), self.emb(key)  # per-window pooling -> (n, p^2, c) (1,49,64)
        attn_logit = (query_hat*self.scale) @ key_hat.transpose(-2, -1)  # (n, p^2, p^2)  (1,49,49)
        topk_attn_logit, topk_index = torch.topk(attn_logit, k=self.topk, dim=-1)  # (n, p^2, k), (n, p^2, k) (1,49,1) (1,49,1)
        r_weight = self.routing_act(topk_attn_logit)  # (n, p^2, k) (1,49,1)
        
        return r_weight, topk_index
```

self.emb is an Identity (no-op) mapping. The attention of Equation 4 is then computed, the top-k entries are taken, and self.routing_act is a softmax activation. As can be seen, the attention output has shape (1, 49, 49): the first 49 is the number of regions, and the second 49 holds each region's affinity-graph scores with every region. torch.topk along dim=-1 then picks the k largest scores and their indices.
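
A toy version of this routing step, assuming 4 regions, 8 channels, and top-1 routing for brevity:

```python
import torch

n, p2, c, topk = 1, 4, 8, 1
q_win = torch.randn(n, p2, c)
k_win = torch.randn(n, p2, c)

attn_logit = (q_win * c ** -0.5) @ k_win.transpose(-2, -1)       # (1, 4, 4) region-to-region affinity
topk_logit, topk_index = torch.topk(attn_logit, k=topk, dim=-1)  # keep the k best regions per row
r_weight = torch.softmax(topk_logit, dim=-1)                     # (1, 4, 1) routing weights
```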

Equation 6 in the paper corresponds to:

```python
kv_pix_sel = self.kv_gather(r_idx=r_idx, r_weight=r_weight, kv=kv_pix)  # (n, p^2, topk, h_kv*w_kv, c_qk+c_v)  (1,49,1,64,128)
k_pix_sel, v_pix_sel = kv_pix_sel.split([self.qk_dim, self.dim], dim=-1)  # (1,49,1,64,64)  (1,49,1,64,64)
```

There is not much to say about self.kv_gather: it simply gathers, for every query region, the pixel-level kv tokens of its top-k routed regions, as sketched below.
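
A functional sketch under the shapes traced above (an illustration, not necessarily the repo's exact KVGather code; r_weight is not applied here, which appears to match the default behaviour):

```python
import torch

def kv_gather_sketch(r_idx, kv):
    """Gather, for every region, the pixel-level kv of its top-k routed regions.

    r_idx: (n, p^2, topk) region indices from the router
    kv:    (n, p^2, w^2, c) pixel-level key/value tokens per region
    returns (n, p^2, topk, w^2, c)
    """
    n, p2, w2, c = kv.size()
    topk = r_idx.size(-1)
    # broadcast kv to (n, p^2, p^2, w^2, c) and pick the routed regions along dim 2
    kv_exp = kv.view(n, 1, p2, w2, c).expand(-1, p2, -1, -1, -1)
    idx = r_idx.view(n, p2, topk, 1, 1).expand(-1, -1, -1, w2, c)
    return torch.gather(kv_exp, dim=2, index=idx)
```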

Multi-head attention is then performed over the gathered tokens, corresponding to Equation 7 in the paper:

```python
k_pix_sel = rearrange(k_pix_sel, 'n p2 k w2 (m c) -> (n p2) m c (k w2)', m=self.num_heads)  # flatten to BMLC, (n*p^2, m, topk*h_kv*w_kv, c_kq//m) transpose here?  (49,2,32,64)
v_pix_sel = rearrange(v_pix_sel, 'n p2 k w2 (m c) -> (n p2) m (k w2) c', m=self.num_heads) # flatten to BMLC, (n*p^2, m, topk*h_kv*w_kv, c_v//m)  (49,2,64,32)
q_pix = rearrange(q_pix, 'n p2 w2 (m c) -> (n p2) m w2 c', m=self.num_heads) # to BMLC tensor (n*p^2, m, w^2, c_qk//m)  (49,2,64,32)

# param-free multihead attention
attn_weight = (q_pix * self.scale) @ k_pix_sel # (n*p^2, m, w^2, c) @ (n*p^2, m, c, topk*h_kv*w_kv) -> (n*p^2, m, w^2, topk*h_kv*w_kv)  (49,2,64,64)
attn_weight = self.attn_act(attn_weight)  # (49,2,64,64)
out = attn_weight @ v_pix_sel # (n*p^2, m, w^2, topk*h_kv*w_kv) @ (n*p^2, m, topk*h_kv*w_kv, c) -> (n*p^2, m, w^2, c)  (49,2,64,32)
out = rearrange(out, '(n j i) m (h w) c -> n (j h) (i w) (m c)', j=self.n_win, i=self.n_win,
                        h=H//self.n_win, w=W//self.n_win)  # (1,56,56,64)

out = out + lepe  # (1,56,56,64)
```

Here n and SxS (the batch dimension and the number of regions) are merged into one dimension; at the end, the output is reshaped back to H x W.

Finally, the result passes through a linear output projection (wo).

Next the tensor enters the MLP module; there is not much to it. Its layers are sketched below.
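
As printed in the structure dump for a stage-1 block (dim 64, hidden 192), it is simply Linear-GELU-Linear; the Identity slot is presumably a placeholder for an optional depthwise conv in other configurations:

```python
import torch.nn as nn

# MLP branch of a stage-1 block, copied from the structure dump (dim 64, hidden 192)
mlp = nn.Sequential(
    nn.Linear(64, 192),
    nn.Identity(),   # placeholder slot (assumption: used for an optional depthwise conv)
    nn.GELU(),
    nn.Linear(192, 64),
)
```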

Network Structure

BiFormer-T:

```
BiFormer(
  (downsample_layers): ModuleList(
    (0): Sequential(
      (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): GELU()
      (3): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): Sequential(
      (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (2): Sequential(
      (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (3): Sequential(
      (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (stages): ModuleList(
    (0): Sequential(
      (0): Block(
        (pos_embed): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
        (norm1): LayerNorm((64,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=64)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=64, out_features=192, bias=True)
          )
          (wo): Linear(in_features=64, out_features=64, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((64,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=64, out_features=192, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=192, out_features=64, bias=True)
        )
        (drop_path): Identity()
      )
      (1): Block(
        (pos_embed): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
        (norm1): LayerNorm((64,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(64, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=64)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=64, out_features=192, bias=True)
          )
          (wo): Linear(in_features=64, out_features=64, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((64,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=64, out_features=192, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=192, out_features=64, bias=True)
        )
        (drop_path): Identity()
      )
    )
    (1): Sequential(
      (0): Block(
        (pos_embed): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128)
        (norm1): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=128)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=128, out_features=384, bias=True)
          )
          (wo): Linear(in_features=128, out_features=128, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=128, out_features=384, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=384, out_features=128, bias=True)
        )
        (drop_path): Identity()
      )
      (1): Block(
        (pos_embed): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128)
        (norm1): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(128, 128, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=128)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=128, out_features=384, bias=True)
          )
          (wo): Linear(in_features=128, out_features=128, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((128,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=128, out_features=384, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=384, out_features=128, bias=True)
        )
        (drop_path): Identity()
      )
    )
    (2): Sequential(
      (0): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (1): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (2): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (3): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (4): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (5): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (6): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
      (7): Block(
        (pos_embed): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
        (norm1): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (attn): BiLevelRoutingAttention(
          (lepe): Conv2d(256, 256, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=256)
          (router): TopkRouting(
            (emb): Identity()
            (routing_act): Softmax(dim=-1)
          )
          (kv_gather): KVGather()
          (qkv): QKVLinear(
            (qkv): Linear(in_features=256, out_features=768, bias=True)
          )
          (wo): Linear(in_features=256, out_features=256, bias=True)
          (kv_down): Identity()
          (attn_act): Softmax(dim=-1)
        )
        (norm2): LayerNorm((256,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=256, out_features=768, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=768, out_features=256, bias=True)
        )
        (drop_path): Identity()
      )
    )
    (3): Sequential(
      (0): Block(
        (pos_embed): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512)
        (norm1): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (attn): AttentionLePE(
          (qkv): Linear(in_features=512, out_features=1536, bias=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
          (proj): Linear(in_features=512, out_features=512, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (lepe): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=512)
        )
        (norm2): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=512, out_features=1536, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=1536, out_features=512, bias=True)
        )
        (drop_path): Identity()
      )
      (1): Block(
        (pos_embed): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512)
        (norm1): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (attn): AttentionLePE(
          (qkv): Linear(in_features=512, out_features=1536, bias=False)
          (attn_drop): Dropout(p=0.0, inplace=False)
          (proj): Linear(in_features=512, out_features=512, bias=True)
          (proj_drop): Dropout(p=0.0, inplace=False)
          (lepe): Conv2d(512, 512, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=512)
        )
        (norm2): LayerNorm((512,), eps=1e-06, elementwise_affine=True)
        (mlp): Sequential(
          (0): Linear(in_features=512, out_features=1536, bias=True)
          (1): Identity()
          (2): GELU()
          (3): Linear(in_features=1536, out_features=512, bias=True)
        )
        (drop_path): Identity()
      )
    )
  )
  (norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (pre_logits): Identity()
  (head): Linear(in_features=512, out_features=1000, bias=True)
)
```

