Scalar Gravity Tensor

Tensor Pairing Algorithm: Implementation Specification

Overview

This implementation provides an algorithm that generates the independent scalar quantities arising from combinations of the tensor quantities required in Feynman's scalar gravity theory.

Reference: https://x.com/7shi/status/1959100031790981364

Physical Background

Problem Setting

Let $h_{\mu\nu}$ be a rank-2 symmetric tensor, with indices raised and lowered by the Minkowski metric $\eta$. Definitions (two example terms follow the list):

  • $h = {h^{\mu}}_{\mu}$ (trace)
  • $(\partial h)_{\alpha} = \partial_{\mu} {h^{\mu}}_{\alpha}$ (divergence)
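
For example, for $n = 1$ the counted quantities are scalar contractions built from one factor of $h_{\mu\nu}$ and two factors of $\partial_{\lambda} h_{\mu\nu}$. Two terms of this form (illustrative only, not a statement of the chosen basis) are $h \, \partial_{\lambda} h_{\mu\nu} \, \partial^{\lambda} h^{\mu\nu}$ and $h^{\mu\nu} \, \partial_{\mu} h \, \partial_{\nu} h$.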

Number of Independent Terms

Let $S_n$ be the number of independent scalar terms of order $n$ in $h_{\mu\nu}$ and of order 2 in $\partial_{\lambda} h_{\mu\nu}$:

  • $S_1 = 16$ (known)
  • $S_2 = 43$
  • $S_3 = 93$
  • $S_4 = 187$
  • $S_5 = 344$
  • $S_6 = 607$

Algorithm Overview

The algorithm generates the independent scalar quantities in the following steps:

  1. Tensor combination generation: from the given numbers of $h$ and $\partial h$ tensors, generate all index pairings
  2. First-stage normalization: standardization via normalize2 (index relabeling and trace handling)
  3. Symmetry filtering: duplicate removal via filter_pairings, accounting for symmetries between identical tensors

Main Classes

Tensor Class

Represents a single tensor quantity:

class Tensor:
    def __init__(self, partial, index_abs, index_rel)

Parameters:

  • partial: Boolean - whether this is a derivative tensor ($\partial h$)
  • index_abs: int - absolute tensor number
  • index_rel: int - relative tensor number

Index structure (a quick check follows this list):

  • non-derivative tensor ($h$): [0, 0] (two symmetric indices)
  • derivative tensor ($\partial h$): [0, 1, 1] (one derivative index + two symmetric indices)
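
A quick check of this slot convention, using the Tensor class defined in the notebook below (the printed lists are the initial indices fields):

print(Tensor(False, 0, 0).indices)  # [0, 0]     -- two symmetric slots
print(Tensor(True, 2, 0).indices)   # [0, 1, 1]  -- one derivative slot + two symmetric slots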

Index Class

Represents an individual tensor index:

class Index:
    def __init__(self, tensor, index)

Main Algorithms

1. Pairing Generation (get_pairings1)

Generates all pairings (contractions) between indices. A per-level cache avoids recomputing identical partial structures.
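
A minimal usage sketch, calling get_pairings1 and create_tensors from the notebook below on the smallest nontrivial case; the count matches the test output shown later:

# Two derivative tensors and no bare h (h=0, p=2); the raw enumeration
# yields the 6 pairings listed in the notebook's test output.
pairings = get_pairings1(create_tensors(0, 2))
print(len(pairings))  # 6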

2. First-Stage Normalization (normalize2)

Processing steps:

  1. Trace separation: separate pairs of identical indices ($h_{\mu\mu}$)
  2. Index sorting: sort the non-trace pairs by .index
  3. Order-of-appearance mapping: renumber tensors in their order of appearance
  4. Final normalization: convert to the standard form with normalize1

Tensor renumbering logic (a simplified sketch follows this list):

  • H tensors: hmap.setdefault(i, len(hmap))
  • P tensors: pmap.setdefault(i, len(pmap) + hnum)
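
A simplified sketch of this renumbering, using plain integers as tensor numbers (the real normalize2 works on Index objects; renumber here is a hypothetical helper for illustration only):

def renumber(tensor_numbers, hnum):
    hmap, pmap = {}, {}
    for i in tensor_numbers:
        if i < hnum:
            hmap.setdefault(i, len(hmap))         # H tensors get 0, 1, ...
        else:
            pmap.setdefault(i, len(pmap) + hnum)  # P tensors get hnum, hnum+1, ...
    hmap.update(pmap)
    return hmap

# Tensors appearing in the order 1, 3, 0, 2 with hnum = 2 are relabeled so that
# H tensors come first in order of appearance, followed by P tensors:
print(renumber([1, 3, 0, 2], 2))  # {1: 0, 0: 1, 3: 2, 2: 3}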

3. Symmetry Filtering (filter_pairings)

Symmetries considered:

  • H-H symmetry: exchange symmetry between non-derivative tensors of the same kind
  • P-P symmetry: exchange symmetry between derivative tensors of the same kind
  • Trace-pair exclusion: tensors carrying a trace pair are excluded from the symmetry operations

Algorithm:

  1. Apply the first-stage normalization via filter_with
  2. For each pairing:
    • Build the lists of tensor numbers, excluding tensors that carry a trace pair
    • Generate the same-kind exchange candidates, H with H and P with P (combinations(hs, 2), etc.)
    • Apply every non-empty subset of these exchange operations (see the sketch after this list)
    • Register each swapped result in all_set for duplicate detection
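
A sketch of the exchange enumeration, mirroring the loop in filter_pairings below: with two h-tensors (numbers 0, 1) and two p-tensors (numbers 2, 3), every non-empty subset of the exchange operations is applied:

from itertools import combinations

hs, ps = [0, 1], [2, 3]  # tensor numbers not involved in a trace pair
swap_nums = list(combinations(hs, 2)) + list(combinations(ps, 2))
for r in range(len(swap_nums)):
    for swap in combinations(swap_nums, r + 1):
        print(swap)
# ((0, 1),)
# ((2, 3),)
# ((0, 1), (2, 3))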

Performance Characteristics

Computational Cost

  • h=2, p=2: 78 → 48 → 43 (1.8× overall reduction)
  • h=3, p=2: 322 → 155 → 93 (3.5× overall reduction)
  • h=4, p=2: 1242 → 475 → 187 (6.6× overall reduction)

Memory Efficiency

  • Per-level caching avoids recomputing identical partial results (sketched below)
  • Set-based duplicate detection gives fast filtering
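
Both mechanisms in miniature, following the cache.setdefault(level, set()) pattern used in get_pairings1 below (seen_before is a hypothetical helper for illustration):

cache = {}

def seen_before(level, item):
    cache_level = cache.setdefault(level, set())  # one set per recursion depth
    if item in cache_level:
        return True
    cache_level.add(item)
    return False

print(seen_before(0, ("a", "b")))  # False -- first occurrence
print(seen_before(0, ("a", "b")))  # True  -- duplicate pruned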

Input/Output Specification

Input

  • h: number of non-derivative tensors ($h_{\mu\nu}$)
  • p: number of derivative tensors ($\partial_{\lambda}h_{\mu\nu}$)

Output

  • the number of independent scalar quantities $S_n$ (with $n = h$); an end-to-end driver sketch follows
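
End to end, the pipeline composes create_tensors, get_pairings1, and filter_pairings from the notebook below; count_independent_scalars itself is a hypothetical wrapper, not defined in the notebook:

def count_independent_scalars(h, p=2):
    tensors = create_tensors(h, p)
    return len(filter_pairings(get_pairings1(tensors)))

print(count_independent_scalars(2))  # 43, i.e. S_2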

Verified Results

h   p   Initial pairings   filter_with   filter_pairings   Time
2   0   2                  2             2                 instant
0   2   6                  5             5                 instant
1   2   19                 16            16                instant
2   2   78                 48            43                instant
3   2   322                155           93                < 1 s
4   2   1,242              475           187               6 s
5   2   4,468              1,399         344               22 s
6   2   15,098             3,924         607               32 min

The final counts (filter_pairings) match the theoretical predictions exactly.

A brute-force enumeration of pairings that ignores symmetry would give $(h+3)!$ candidates; the initial pairing stage already eliminates some of the duplicates during enumeration.

Notebook Source

The gist's attached notebook (Python 3.13 kernel), rendered as code cells with their outputs.

In [1]:

class Tensor:
    def __init__(self, partial, index_abs, index_rel):
        self.partial = partial
        self.index_abs = index_abs
        self.index_rel = index_rel
        if partial:
            self.indices = [0, 1, 1]
        else:
            self.indices = [0, 0]
        self.initial = self.indices.copy()

    def __repr__(self):
        return f"{self.index_abs}{self.indices}"

    def copy(self):
        copied = Tensor(self.partial, self.index_abs, self.index_rel)
        copied.indices = self.indices.copy()
        return copied

    def is_initial(self):
        return self.indices == self.initial

    def is_available(self):
        return len(self.indices) > 0

    def get(self, index):
        try:
            i = self.indices.index(index)
            self.indices.pop(i)
            return True
        except ValueError:
            return False

    def get_first(self):
        return self.indices.pop(0)

def create_tensors(h, p):
    hs = [Tensor(False, i, i) for i in range(h)]
    ps = [Tensor(True, h + i, i) for i in range(p)]
    return hs + ps

create_tensors(2, 2)

Out[1]:

[0[0, 0], 1[0, 0], 2[0, 1, 1], 3[0, 1, 1]]

In [2]:

class Index:
    def __init__(self, tensor, index):
        self.tensor = tensor
        self.index = index

    def to_tuple(self):
        return (self.tensor.index_abs, self.index)

    def __repr__(self):
        s = str(self.tensor.index_abs)
        if self.tensor.partial:
            s += f".{self.index}"
        return s

    def __hash__(self):
        return hash(self.to_tuple())

    def __eq__(self, other):
        if not isinstance(other, Index):
            return False
        return self.to_tuple() == other.to_tuple()

    def __lt__(self, other):
        if not isinstance(other, Index):
            return NotImplemented
        return self.to_tuple() < other.to_tuple()

def extract_indices(tensors):
    return [Index(t, i) for t in tensors for i in t.indices]

extract_indices(create_tensors(2, 2))

Out[2]:

[0, 0, 1, 1, 2.0, 2.1, 2.1, 3.0, 3.1, 3.1]

In [3]:

def create_tensor_combinations1(tensors):
    ths = [t.copy() for t in tensors if t.is_available() and not t.partial]
    tps = [t.copy() for t in tensors if t.is_available() and t.partial]
    head = None
    ts = []
    used = set()
    offset = 0
    if ths:
        head = Index(ths[0], ths[0].get_first())
        hs = extract_indices(ths)
        last_t = None
        for i in range(len(hs)):
            index = hs[i]
            if last_t and last_t.is_initial() and last_t != index.tensor:
                break
            s = repr(index)
            if s not in used:
                ts.append(i)
                used.add(s)
            last_t = index.tensor
        offset = len(hs)
    if tps:
        if not head:
            head = Index(tps[0], tps[0].get_first())
        ps = extract_indices(tps)
        last_t = None
        for i in range(len(ps)):
            index = ps[i]
            if last_t and last_t.is_initial() and last_t != index.tensor:
                break
            s = repr(index)
            if s not in used:
                ts.append(offset + i)
                used.add(s)
            last_t = index.tensor
    return head, ths + tps, ts

def test_create_tensor_combinations_1(h, p):
    tensors = create_tensors(h, p)
    print((h, p), create_tensor_combinations1(tensors))

test_create_tensor_combinations_1(2, 0)
test_create_tensor_combinations_1(0, 2)
test_create_tensor_combinations_1(1, 2)
test_create_tensor_combinations_1(2, 2)

(2, 0) (0, [0[0], 1[0, 0]], [0, 1])
(0, 2) (0.0, [0[1, 1], 1[0, 1, 1]], [0, 2, 3])
(1, 2) (0, [0[0], 1[0, 1, 1], 2[0, 1, 1]], [0, 1, 2])
(2, 2) (0, [0[0], 1[0, 0], 2[0, 1, 1], 3[0, 1, 1]], [0, 1, 3, 4])

In [4]:

def normalize1(pairing):
    return sorted(tuple(sorted(p)) for p in pairing)

def test_normalize1(pairing):
    print(pairing, "->", normalize1(pairing))

test_normalize1([(0.1, 0.0), (1.1, 1.1), (1.0, 0.1)])

[(0.1, 0.0), (1.1, 1.1), (1.0, 0.1)] -> [(0.0, 0.1), (0.1, 1.0), (1.1, 1.1)]

In [13]:

def get_pairings1(tensors):
    cache = {}

    def f(ac, tensors, level):
        head, tensors, ts = create_tensor_combinations1(tensors)
        if not head:
            yield ac
            return
        for i in ts:
            tc = [t.copy() for t in tensors]
            indices = extract_indices(tc)
            ti = indices[i]
            ti.tensor.get(ti.index)
            ac2 = tuple(normalize1([*ac, (head, ti)]))
            cache_level = cache.setdefault(level, set())
            if ac2 not in cache_level:
                cache_level.add(ac2)
                yield from f(ac2, tc, level + 1)

    return list(f([], tensors, 0))

def test_get_pairings1(h, p, show=True):
    tensors = create_tensors(h, p)
    results = get_pairings1(tensors)
    if show:
        print((h, p), len(results), results)
    else:
        print((h, p), len(results))

test_get_pairings1(2, 0)
test_get_pairings1(0, 2)
for index in range(1, 7):
    test_get_pairings1(index, 2, False)

(2, 0) 2 [((0, 0), (1, 1)), ((0, 1), (0, 1))]
(0, 2) 6 [((0.0, 0.1), (0.1, 1.0), (1.1, 1.1)), ((0.0, 0.1), (0.1, 1.1), (1.0, 1.1)), ((0.0, 1.0), (0.1, 0.1), (1.1, 1.1)), ((0.0, 1.0), (0.1, 1.1), (0.1, 1.1)), ((0.0, 1.1), (0.1, 0.1), (1.0, 1.1)), ((0.0, 1.1), (0.1, 1.0), (0.1, 1.1))]
(1, 2) 19
(2, 2) 78
(3, 2) 322
(4, 2) 1242
(5, 2) 4468
(6, 2) 15098

In [6]:

def get_h_p(pairings):
    h = -1
    p = -1
    for p0, p1 in pairings:
        if (t0 := p0.tensor).partial:
            p = max(p, t0.index_abs)
        else:
            h = max(h, t0.index_abs)
        if (t1 := p1.tensor).partial:
            p = max(p, t1.index_abs)
        else:
            h = max(h, t1.index_abs)
    return h + 1, max(0, p - h)

def apply_swap_map(pairing, swap_map, hnum, pnum):
    tmap = {}
    for i in range(hnum + pnum):
        v = swap_map.get(i, i)
        if i < hnum:
            tmap[i] = Tensor(False, v, v)
        else:
            tmap[i] = Tensor(True, v, v - hnum)
    results = []
    for p0, p1 in pairing:
        i0 = Index(tmap[p0.tensor.index_abs], p0.index)
        i1 = Index(tmap[p1.tensor.index_abs], p1.index)
        results.append((i0, i1))
    return results

def normalize2(pairing):
    p1 = []
    p2 = []
    for p in pairing:
        if p[0] == p[1]:
            p1.append(p)
        else:
            p2.append(tuple(sorted(p, key=lambda x: x.index)))
    p2.sort(key=lambda x: (x[0].index, x[1].index))

    hnum, pnum = get_h_p(pairing)
    hmap = {}
    pmap = {}

    def add_map(index):
        i = index.tensor.index_abs
        if i < hnum:
            hmap.setdefault(i, len(hmap))
        else:
            pmap.setdefault(i, len(pmap) + hnum)

    for i0, i1 in p1:
        add_map(i0)
        add_map(i1)
    for i0, i1 in p2:
        add_map(i0)
        add_map(i1)

    hmap.update(pmap)
    return normalize1(apply_swap_map(pairing, hmap, hnum, pnum))

def test_normalize2(tensors, start, end):
    pairings = get_pairings1(tensors)
    h, p = get_h_p(pairings[0])
    print(f"h = {h}, p = {p}")
    for i, p in enumerate(pairings[start:end], start):
        print(f"{i}: {p} -> {normalize2(p)}")

test_normalize2(create_tensors(2, 2), 50, 60)

h = 2, p = 2
50: ((0, 2.0), (0, 3.0), (1, 3.1), (1, 3.1), (2.1, 2.1)) -> [(0, 2.0), (0, 3.0), (1, 3.1), (1, 3.1), (2.1, 2.1)]
51: ((0, 2.0), (0, 3.1), (1, 1), (2.1, 2.1), (3.0, 3.1)) -> [(0, 0), (1, 2.0), (1, 3.1), (2.1, 2.1), (3.0, 3.1)]
52: ((0, 2.0), (0, 3.1), (1, 1), (2.1, 3.0), (2.1, 3.1)) -> [(0, 0), (1, 2.0), (1, 3.1), (2.1, 3.0), (2.1, 3.1)]
53: ((0, 2.0), (0, 3.1), (1, 2.1), (1, 2.1), (3.0, 3.1)) -> [(0, 2.0), (0, 3.1), (1, 2.1), (1, 2.1), (3.0, 3.1)]
54: ((0, 2.0), (0, 3.1), (1, 2.1), (1, 3.0), (2.1, 3.1)) -> [(0, 2.0), (0, 3.1), (1, 2.1), (1, 3.0), (2.1, 3.1)]
55: ((0, 2.0), (0, 3.1), (1, 2.1), (1, 3.1), (2.1, 3.0)) -> [(0, 2.0), (0, 3.1), (1, 2.1), (1, 3.1), (2.1, 3.0)]
56: ((0, 2.0), (0, 3.1), (1, 3.0), (1, 3.1), (2.1, 2.1)) -> [(0, 2.0), (0, 3.1), (1, 3.0), (1, 3.1), (2.1, 2.1)]
57: ((0, 2.1), (0, 2.1), (1, 1), (2.0, 3.0), (3.1, 3.1)) -> [(0, 0), (1, 3.1), (1, 3.1), (2.0, 3.0), (2.1, 2.1)]
58: ((0, 2.1), (0, 2.1), (1, 1), (2.0, 3.1), (3.0, 3.1)) -> [(0, 0), (1, 2.1), (1, 2.1), (2.0, 3.1), (3.0, 3.1)]
59: ((0, 2.1), (0, 2.1), (1, 2.0), (1, 3.0), (3.1, 3.1)) -> [(0, 2.0), (0, 3.0), (1, 3.1), (1, 3.1), (2.1, 2.1)]

In [14]:

def filter_with(pairings, f=normalize2):
    pairings = set(tuple(f(p)) for p in pairings)
    return sorted(pairings)

def test_filter_with(h, p):
    tensors = create_tensors(h, p)
    pairings = get_pairings1(tensors)
    filtered = filter_with(pairings)
    print((h, p), len(pairings), len(filtered))

test_filter_with(2, 0)
test_filter_with(0, 2)
for index in range(1, 7):
    test_filter_with(index, 2)

(2, 0) 2 2
(0, 2) 6 5
(1, 2) 19 16
(2, 2) 78 48
(3, 2) 322 155
(4, 2) 1242 475
(5, 2) 4468 1399
(6, 2) 15098 3924

In [12]:

from itertools import combinations

def filter_pairings(pairings):
    pairings = filter_with(pairings)
    h, p = get_h_p(pairings[0])
    all_set = set()
    results = []
    for pairing in pairings:
        tupled_pairing = tuple(pairing)
        if tupled_pairing in all_set:
            continue
        results.append(pairing)

        hs = list(range(0, h))
        ps = list(range(h, h + p))
        for p0, p1 in pairing:
            if p0 == p1:
                i = p0.tensor.index_abs
                if i in hs:
                    hs.remove(i)
                elif i in ps:
                    ps.remove(i)
        hs_combinations = list(combinations(hs, 2)) if len(hs) > 1 else []
        ps_combinations = list(combinations(ps, 2)) if len(ps) > 1 else []
        swap_nums = hs_combinations + ps_combinations
        if swap_nums:
            for r in range(len(swap_nums)):
                for swap in combinations(swap_nums, r + 1):
                    swapped_pairing = pairing
                    for fi, ti in swap:
                        swap_map = {fi: ti, ti: fi}
                        swapped_pairing = apply_swap_map(swapped_pairing, swap_map, h, p)
                    all_set.add(tuple(normalize1(swapped_pairing)))
        else:
            all_set.add(tupled_pairing)

    return results

def test_filter_pairings(h, p):
    tensors = create_tensors(h, p)
    pairings = get_pairings1(tensors)
    filtered = filter_pairings(pairings)
    print((h, p), len(pairings), len(filtered))

test_filter_pairings(2, 0)
test_filter_pairings(0, 2)
for index in range(1, 7):
    test_filter_pairings(index, 2)

(2, 0) 2 2
(0, 2) 6 5
(1, 2) 19 16
(2, 2) 78 43
(3, 2) 322 93
(4, 2) 1242 187
(5, 2) 4468 344
(6, 2) 15098 607