A very simple gRPC program, with code generated by go-zero. Its only job is to send HTTP requests on behalf of the client and return the response body/response headers, nothing more. During load testing, memory keeps climbing without ever stopping or dropping, until the OS kills the process. Here is the main code:
type Request struct {
}
var httpClientPool = sync.Pool{New: func() interface{} {
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				InsecureSkipVerify: true,
			},
		},
	}
}}
var bufPool = sync.Pool{
	New: func() interface{} {
		return bytes.NewBuffer([]byte{})
	},
}
func (l *RequestLogic) Request(in *cache.RequestMsg) (*cache.ResponseMsg, error) {
	var result = new(cache.ResponseMsg)
	req, err := http.NewRequest(in.Method, string(in.Url), bytes.NewReader(in.Body))
	if err != nil {
		result.ErrorMsg = err.Error()
		return result, errors.Wrap(err, "build request error")
	}
	for k, v := range in.Headers {
		req.Header.Add(k, v)
	}
	client := httpClientPool.Get().(*http.Client)
	defer httpClientPool.Put(client)
	client.Timeout = request_timeout
	if in.ProxyUrl != "" {
		u, _ := url.Parse(in.ProxyUrl)
		if u != nil {
			client.Transport.(*http.Transport).Proxy = http.ProxyURL(u)
		}
	}
	resp, err := client.Do(req)
	if err != nil {
		return result, errors.Wrap(err, "request error")
	}
	defer resp.Body.Close()
	result.Url = []byte(resp.Request.URL.String())
	result.StatusCode = int32(resp.StatusCode)
	var w = bufPool.Get().(*bytes.Buffer)
	defer bufPool.Put(w)
	if err := gz.Compress(resp.Body, w); err != nil {
		return result, errors.Wrap(err, "gz compress error")
	}
	result.IsCompressor = true
	result.Body = w.Bytes()
	result.Headers = make(map[string]string)
	for k, values := range resp.Header {
		for _, v := range values {
			result.Headers[k] = v
		}
	}
	return result, nil
}
I also looked at it with pprof; everything is attributed to bytes allocations, but it doesn't seem to be a problem with the code I wrote?
Showing nodes accounting for 930.40MB, 99.07% of 939.12MB total
Dropped 42 nodes (cum <= 4.70MB)
Showing top 10 nodes out of 38
      flat  flat%   sum%        cum   cum%
  760.34MB 80.96% 80.96%   760.34MB 80.96%  google.golang.org/protobuf/proto.MarshalOptions.marshal
  151.13MB 16.09% 97.06%   151.13MB 16.09%  bytes.makeSlice
   12.23MB  1.30% 98.36%    12.23MB  1.30%  google.golang.org/grpc/internal/transport.newBufWriter (inline)
    6.19MB  0.66% 99.02%     6.19MB  0.66%  bufio.NewReaderSize (inline)
    0.50MB 0.053% 99.07%    18.92MB  2.01%  google.golang.org/grpc/internal/transport.NewServerTransport
         0     0% 99.07%   150.62MB 16.04%  bytes.(*Buffer).Write
         0     0% 99.07%   151.13MB 16.09%  bytes.(*Buffer).grow
         0     0% 99.07%   150.62MB 16.04%  compress/flate.(*Writer).Write
         0     0% 99.07%   150.62MB 16.04%  compress/flate.(*compressor).deflate
         0     0% 99.07%   150.62MB 16.04%  compress/flate.(*compressor).write
How do I optimize this? Or should I just give up on RPC and build an HTTP proxy server instead?
1  Mohanson  2021-11-20 18:40:39 +08:00  Blind guess: you keep taking a buffer from the pool -> appending data -> putting it back -> taking a buffer again -> appending more data. > but it doesn't seem to be a problem with the code I wrote? It is definitely a problem with your code, what else would it be? Do you think you've discovered a bug in the standard library, the compiler, the OS, or the CPU?
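For reference, a minimal runnable sketch of the pattern Mohanson describes, assuming sync.Pool hands back the same buffer on a single goroutine (which is typical but not guaranteed): because the pooled bytes.Buffer is never Reset, each "request" appends on top of the previous data and the backing array only ever grows.

package main

import (
	"bytes"
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return bytes.NewBuffer(nil) },
}

func main() {
	payload := make([]byte, 1024)
	for i := 0; i < 5; i++ {
		// Get -> append -> Put, with no Reset, as in the posted handler.
		buf := bufPool.Get().(*bytes.Buffer)
		buf.Write(payload) // the previous contents are still in the buffer
		fmt.Printf("after request %d: len=%d\n", i, buf.Len())
		bufPool.Put(buf) // returned dirty; the next Get sees everything written so far
	}
}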
2  Nitroethane  2021-11-20 18:50:05 +08:00  The bytes.Buffer should be Reset before calling bufPool.Put(). Also, I don't know how you wrote the data-compression part; there may be a problem there too.
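A minimal sketch of the fix suggested here, under the assumption that the unknown gz.Compress is roughly a compress/gzip round-trip (the helper name compressToBytes is made up for illustration): reset the buffer before putting it back, and copy the bytes out of it, because w.Bytes() aliases the pooled buffer's backing array and the next request would otherwise reuse it while result.Body is still referenced.

package main

import (
	"bytes"
	"compress/gzip"
	"io"
	"strings"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return bytes.NewBuffer(nil) },
}

// compressToBytes gzips r into a pooled buffer and returns a copy of the
// result so the buffer can be safely reset and reused. (Stand-in for the
// original gz.Compress call, which is not shown in the post.)
func compressToBytes(r io.Reader) ([]byte, error) {
	w := bufPool.Get().(*bytes.Buffer)
	defer func() {
		w.Reset()      // drop old contents so the buffer does not grow across requests
		bufPool.Put(w) // hand a clean buffer back to the pool
	}()

	zw := gzip.NewWriter(w)
	if _, err := io.Copy(zw, r); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	// Copy instead of aliasing w.Bytes(): the pooled buffer will be reused
	// after this function returns, while the caller may keep the slice.
	return append([]byte(nil), w.Bytes()...), nil
}

func main() {
	_, _ = compressToBytes(strings.NewReader("hello"))
}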
3  vvhhaaattt  2021-11-20 19:47:13 +08:00 via Android  http.Client already implements a connection pool internally; normally you'd just initialize one global variable and use that. For the OP's problem, though, I don't think this matters much.
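A sketch of the usual pattern this comment refers to, with one package-level http.Client shared by all requests; the 30-second timeout and the fetch helper are made-up placeholders. Note that the posted handler mutates Transport.Proxy per request, which would be a data race on a shared client, so per-proxy clients (or a Proxy func that reads the proxy from the request's context) would be needed instead.

package main

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// One shared client for the whole process. http.Transport keeps its own pool
// of idle connections, so pooling *http.Client values adds nothing; reusing a
// single client is what actually enables connection reuse.
var httpClient = &http.Client{
	Timeout: 30 * time.Second, // placeholder; the original uses request_timeout
	Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	},
}

func fetch(url string) ([]byte, error) {
	resp, err := httpClient.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	_, _ = fetch("https://example.com")
}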